In retrospect, the flaw in the Turing Test was always using human discernment as the model to measure against.

LLMs easily pass the Turing Test, not because they’re sentient, but because we’re dumbasses who interpret the most dopamine-supplying response to ourselves as the most intelligently human.

@Catvalente I think the Turing Test was intended as a necessary condition for demonstrating artificial intelligence, but not necessarily a sufficient one.

To be fair, there are probably people who cannot pass a Turing Test.

@Jvmguy @Catvalente Someone recently put it as (roughly paraphrasing, because I can't remember their exact words): "The thing that makes LLMs seem intelligent isn't the technology, it's people's ability to see faces in toast."
@StryderNotavi @Jvmguy @Catvalente Yes! I've been making this same analogy (anthropomorphizing Markov models vs. pareidolia). They're just approximate information retrieval systems -- astonishingly good ones, but still just IR; they have no capacity for reasoning, even the so-called "reasoning" ones.
@wollman @StryderNotavi @Jvmguy @Catvalente
Brains—ours and everybody else’s—are pattern recognizing machines. LLMs are pattern creation machines. The problem with brains is they’re so thirsty for patterns they will create them out of thin air even—especially—if there’s nothing really there. We’ve created machines that feed us patterns that may or may not be there. Marriage made in hell.
@qurlyjoe @wollman @StryderNotavi @Jvmguy Foucault's Pendulum told us.