The Turing Test poisoned the minds of generations of AI enthusiasts, because its criterion is producing text that persuades observers it was written by a human.

The result? Generative AI text products designed to "appear real" rather than produce accurate or ethical outputs.

It *should* be obvious why it's problematic to create a piece of software that excels at persuasion without concern for accuracy, honesty or ethics. But apparently it's not.

@intelwire Was that the point of the test? I’m confused.
@intelwire @skry Why would they make it that way? Turing would not have approved.

@skry @schoolingdiana @intelwire Turing's original paper is about how behavioural testing is useless in determining intelligence. He never said, "use this test to determine if machines are intelligent". He meant the opposite: don't even bother, since you can never know if it is real intelligence or something pretending to be intelligent.

https://academic.oup.com/mind/article/LIX/236/433/986238

[Link preview: A. M. Turing, "Computing Machinery and Intelligence", Mind — OUP Academic]

@szakib

Thank you for that link. In recent years I have often shared @intelwire's frustration over AI/ML's obsession with the Turing test and the predictably problematic results, so it's good to go back to Turing's actual paper!

But I'm not sure your conclusion is more valid than the pop-culture one. He clearly argues that "Can machines think?" is functionally equivalent to "can they imitate human responses to questions?" and then proposes machine learning theory & urges its exploration.