The Turing Test poisoned the minds of generations of AI enthusiasts, because its criterion is producing text that persuades observers it was written by a human.

The result? Generative AI text products designed to "appear real" rather than produce accurate or ethical outputs.

It *should* be obvious why it's problematic to create a piece of software that excels at persuasion without concern for accuracy, honesty or ethics. But apparently it's not.

@intelwire Was that the point of the test? I’m confused.
@intelwire @skry Why would they make it that way? Turing would not have approved.

@skry @schoolingdiana @intelwire Turing's original paper is about how behavioural testing is useless in determining intelligence. He never said, "use this test to determine if machines are intelligent". He meant the opposite: don't even bother, since you can never know if it is real intelligence or something pretending to be intelligent.

https://academic.oup.com/mind/article/LIX/236/433/986238

I.—COMPUTING MACHINERY AND INTELLIGENCE

@szakib @skry @schoolingdiana To be clear, I'm not blaming Turing
@intelwire @skry @schoolingdiana I didn't think you were, this was more of a "yes, and".