The Turing Test poisoned the minds of generations of AI enthusiasts, because its criterion is producing text that persuades observers it was written by a human.

The result? Generative AI text products designed to "appear real" rather than produce accurate or ethical outputs.

It *should* be obvious why it's problematic to create a piece of software that excels at persuasion without concern for accuracy, honesty or ethics. But apparently it's not.

@intelwire Was that the point of the test? I’m confused.
@intelwire @skry Why would they make it that way? Turing would not have approved.
@schoolingdiana @intelwire @skry
When developing software you tend to get what you test for. If people are only using the Turing test to evaluate their AI software, they will end up with something that seems human but may not be accurate or fair.
@AdamDavis @schoolingdiana @intelwire True, which is one reason why the Turing test is no longer seriously considered. The other is that we've already seen AIs blow past that threshold.
@skry @AdamDavis @schoolingdiana I'm thinking of it as more of a cultural artifact than the thing in itself. The idea that success for a generative AI is a humanlike presentation, and everything else is a minor detail that can be worked out later, i.e. the Yann LeCun attitude.
@intelwire @AdamDavis @skry @schoolingdiana “success… is a humanlike presentation and everything else is a minor detail that can be worked out later” sounds like a lot of political campaigns…

@AdamDavis @schoolingdiana @skry @intelwire

With that description of the Yann LeCun attitude, I suddenly have a Tom Lehrer lyric stuck in my head…

“Once the rockets are up, who cares where they come down? That's not my department,” says Wernher von Braun

@AdamDavis @schoolingdiana @intelwire @skry

Are you implying that humans are accurate and fair? Not the ones I meet.

@rrb @schoolingdiana @intelwire @skry No, they're not. But if you're building an AI to be a source of information or to improve an existing process, then accuracy and fairness are important.

@AdamDavis @schoolingdiana @intelwire @skry

Yes, but then you should not be using the Turing test for acceptance, right?

@rrb @schoolingdiana @intelwire @skry True. I'm not saying that people should be doing Turing-like tests, I'm saying that product leads are often more concerned with their AI systems appearing to be human than anything else.

I'm also suggesting that (in general) if you develop software and you don't test for certain features then you don't value those features.

@AdamDavis @schoolingdiana @intelwire @skry

Agreed. If you look at the failure modes, they always seem to affect people who would not be in the C-suites of companies.

Like facial recognition that works well on white/Asian males but finds that all dark-skinned people look alike. Not to mention women.