The Turing Test poisoned the minds of generations of AI enthusiasts, because its criterion for success is producing text that persuades observers it was written by a human.

The result? Generative AI text products designed to "appear real" rather than produce accurate or ethical outputs.

It *should* be obvious why it's problematic to create a piece of software that excels at persuasion without concern for accuracy, honesty or ethics. But apparently it's not.

@intelwire Was that the point of the test? I’m confused.
@intelwire @skry Why would they make it that way? Turing would not have approved.
@schoolingdiana @intelwire @skry
When developing software you tend to get what you test for. If people are only using the Turing test to evaluate their AI software, they will end up with something that seems human but may not be accurate or fair.

@AdamDavis @schoolingdiana @intelwire @skry

Are you implying that humans are accurate and fair? Not the ones I meet.

@rrb @schoolingdiana @intelwire @skry No, they're not. But if you're building an AI to be a source of information or to improve an existing process, then accuracy and fairness are important.

@AdamDavis @schoolingdiana @intelwire @skry

Yes, but then you should not be using the Turing test for acceptance, right?

@rrb @schoolingdiana @intelwire @skry True. I'm not saying that people should be doing Turing-like tests, I'm saying that product leads are often more concerned with their AI systems appearing to be human than anything else.

I'm also suggesting that (in general) if you develop software and you don't test for certain features then you don't value those features.

@AdamDavis @schoolingdiana @intelwire @skry

Agreed. If you look at the failure modes, it always seems to affect people who would not be in the C-suites of companies.

Like facial recognition that works well on white/Asian males but finds that all dark-skinned people look alike. Not to mention women.