The Turing Test poisoned the minds of generations of AI enthusiasts, because its criterion is producing text that persuades observers it was written by a human.

The result? Generative AI text products designed to "appear real" rather than produce accurate or ethical outputs.

It *should* be obvious why it's problematic to create a piece of software that excels at persuasion without concern for accuracy, honesty, or ethics. But apparently it's not.

@intelwire

Turing was not interested in "proving" whether machines could "think."

He simply postulated that if a human could have a sufficiently long conversation with a machine without realizing it was a machine, then it was irrelevant whether or not the machine was actually "thinking", any more than you can know your neighbour is actually "thinking".

Turing called it "the imitation game"; others called it the "Turing test".

@geekwisdom @intelwire Or, as Edsger Dijkstra put it, “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
At some point, the question of whether machines are “thinking” becomes academic navel-gazing, when the practical question is whether machines can solve many problems that were traditionally believed to require human intelligence, and the answer is unequivocally “Yes”.
@geekwisdom @intelwire The problem becomes the danger of such specialized problem-solvers, divorced from any understanding of the consequences of their actions, much less possessing any moral framework, deployed in ways able to effect real-world change. It falls to humans to act as the moral safeguards, but we are often direly lacking in that respect, eager to reap rewards now and weather consequences later.
@geekwisdom @intelwire I used to think Philosophical Zombies were an impossibility, that consciousness naturally emerges as a consequence of intelligence, but now we find ourselves staring alien beasts of our own creation straight in the face. Not strange minds, but something else entirely, clanking facades hiding a vast hollow nothingness, forcing us to confront that maybe our own sense of self is just an aberration and perhaps consciousness does not, in fact, convey any advantage.

@Mapache

@intelwire

I would argue humans have been making real-world changes divorced from the consequences of their actions from the start, and in that respect are much the same as any AI.