The Turing Test poisoned the minds of generations of AI enthusiasts, because its criterion is producing text that persuades observers it was written by a human.

The result? Generative AI text products designed to "appear real" rather than produce accurate or ethical outputs.

It *should* be obvious why it's problematic to create a piece of software that excels at persuasion without concern for accuracy, honesty or ethics. But apparently it's not.

@intelwire Was that the point of the test? I’m confused.
@intelwire @skry Why would they make it that way? Turing would not have approved.
@schoolingdiana @intelwire @skry
When developing software you tend to get what you test for. If people are only using the Turing test to evaluate their AI software, they will end up with something that seems human but may not be accurate or fair.
@AdamDavis @schoolingdiana @intelwire True, which is one reason why the Turing test is no longer seriously considered. The other is that we've already seen AIs blow past that threshold.
@skry @AdamDavis @schoolingdiana I'm thinking of it as more of a cultural artifact than the thing in itself. The idea that success for a generative AI is a humanlike presentation and everything else is a minor detail that can be worked out later. i.e. the Yann LeCun attitude.

@AdamDavis @schoolingdiana @skry @intelwire

With that description of the Yann LeCun attitude, I suddenly have a Tom Lehrer lyric stuck in my head…

“If the rockets go up, who cares where they come down? That's not my department,” says Wernher von Braun