The Turing Test poisoned the minds of generations of AI enthusiasts, because its criterion is producing text that persuades observers it was written by a human.

The result? Generative AI text products designed to "appear real" rather than produce accurate or ethical outputs.

It *should* be obvious why it's problematic to create a piece of software that excels at persuasion without concern for accuracy, honesty or ethics. But apparently it's not.

@intelwire

Turing was not interested in "proving" whether machines could "think".

He simply postulated that if a human could have a significantly long conversation with a machine without realizing it was a machine, then it was irrelevant whether or not the machine was actually "thinking", any more than you know whether your neighbour is actually "thinking".

Turing called it "the imitation game"; others called it the "Turing test".

@geekwisdom I know, but I would argue a) it's become the rhetorical (if not actual) standard for assessing AI thinking, and b) he proposed it because it was too hard to figure out if a machine was actually thinking.
@intelwire @geekwisdom this reminds me all over again of that fear that someone or something will convince us that we can upload our consciousness onto something

and an entire civilization ends up being wiped out and replaced by p-zombies

@intelwire

The problem, in my opinion, is that we can "see" the code that makes software work. When we don't know how to make a machine do something, we assume it must require human "intelligence". Then someone writes an algorithm that does it, and everyone looks at it and says "oh, I guess it doesn't require intelligence after all."

@intelwire @geekwisdom The latter point actually presaged the notion of the “hard problem of consciousness”.