My fellow journalists, I need you to stop talking about the Turing Test; it was never a good metric. Turing was a computer scientist, not a psychologist. ELIZA passed it, and it was also a decent person-centered therapy bot. Some people actually used it that way, even knowing it was a bot.

A) There are many paths to the top of a mountain, and

B) LLMs and other AIs aren't real girls and boys just because we feel they are.

@quinn

If you read Turing's paper and think about it, you'll realize two things.

One: What everyone thinks the Turing Test is has nothing to do with what Turing proposed.

Two: Turing's test was actually subtle: it asks the computer to be empathetic enough with human men and women to be able to perform as the man in a game where a man is trying to persuade someone he's a woman.
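The structure of that original imitation game can be sketched as a small protocol. This is a toy illustration only; the class names and the trivial judging logic are mine, not Turing's (though the sample question and answer are lifted from his paper):

```python
class Scripted:
    """A player who answers from a fixed script (stand-in for man/machine/woman)."""
    def __init__(self, replies):
        self.replies = list(replies)

    def answer(self, question):
        return self.replies.pop(0)


class Interrogator:
    """Sees only text; must decide which player is the woman."""
    def ask(self, transcript):
        return "How long is your hair?"  # example question from Turing's paper

    def judge(self, transcript):
        # Trivial placeholder: always guesses A.
        # A real interrogator would weigh the answers in the transcript.
        return "A"


def imitation_game(interrogator, player_a, player_b, rounds=1):
    """Run the game; return 'A' or 'B' for whoever the
    interrogator believes is the woman."""
    transcript = []
    for _ in range(rounds):
        q = interrogator.ask(transcript)
        transcript.append(("Q", q))
        transcript.append(("A", player_a.answer(q)))  # A: the deceiver (man, or machine in his place)
        transcript.append(("B", player_b.answer(q)))  # B: the woman, answering truthfully
    return interrogator.judge(transcript)
```

The point the sketch makes: the machine's task is not "seem human" in the abstract, but to take player A's seat and deceive as well as a man could, which requires modeling how a woman would answer.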

Turing was one of the smartest blokes of the 20th century, unlike our illiterate 21st century AI bros.

@djl So I have, but it has been a minute, and I do know about the gender part (and how it's a bit problematic by current standards, but, you know, gay man of the time; honestly better than many, though still wildly misogynistic).

@quinn

I agree that it hasn't aged well.

And would need to be rethought for our current ideas about gender and identity.

Still, asking the computer to have empathy for people with different gender (or other) identities seems a rather good idea.

People are easily fooled. Human intelligence seems practically designed to be fooled, since anthropomorphisation is such a powerful tool/heuristic for dealing with things that really aren't anywhere near human.

So we need something more subtle.

@djl I think asking computers to have empathy at all is a terrible decision, because it anthropomorphizes a Turing machine, and that is always going to end in tears. And probably blood.

@quinn

I get that. But.

AI (done right) is a branch of cognitive science, and how empathy fits in with cognition is a valid question for cog. sci.

Programs informed by cognitive science might handle empathy sensibly. Might. Some day, in a distant future when the current round of AI bros have been chased away, and we have figured out the stuff (basic cognition) we failed at so badly in the 70s and 80s.

Will we make the necessary progress in cog. sci. for those ideas? Maybe, maybe not.