The Turing Test poisoned the minds of generations of AI enthusiasts, because its criterion is producing text that persuades observers it was written by a human.

The result? Generative AI text products designed to "appear real" rather than produce accurate or ethical outputs.

It *should* be obvious why it's problematic to create a piece of software that excels at persuasion without concern for accuracy, honesty or ethics. But apparently it's not.

@intelwire Was that the point of the test? I’m confused.
@intelwire @skry Why would they make it that way? Turing would not have approved.

@skry @schoolingdiana @intelwire Turing's original paper is about how behavioural testing is useless for determining intelligence. He never said, "use this test to determine if machines are intelligent". He meant the opposite: don't even bother, since you can never know whether it is real intelligence or something pretending to be intelligent.

https://academic.oup.com/mind/article/LIX/236/433/986238

I.—COMPUTING MACHINERY AND INTELLIGENCE

I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. …
@szakib
I have just been listening to "Alan Turing: The Enigma", and the author seemed to be trying to paint his approach as pragmatism. Sort of: you will never be able to tell whether an AI is truly intelligent, so if it is good enough that you can't decide, then it is good enough that it should be considered intelligent.