What LLMs and the Turing test¹ tell us: most of us are not only stochastic parrots² ³ but also fine with that – otherwise we would not happily use LLMs to produce all the output we communicate to others.

One could frame this as an “insult to humanity”, but I prefer to call it telling.

__
¹the Turing test does _not_ measure “intelligence”. I recommend reading the original paper: https://archive.org/details/MIND--COMPUTING-MACHINERY-AND-INTELLIGENCE
²if you want to phrase it more politely: “behave like”
³at least regarding social interactions

#AI #LLM #Turing_Test

Do the thinking models actually think? | ByteSauna

LLMs mimic understanding but think bottom-up, unlike humans. Explore why they’re more than autocomplete and why the future is human–AI collaboration, not replacement.


3/ I strongly suspect that #LLM systems are just optimized to pass the #Turing_test with flying colours.
I did not notice it at once because the Turing test seems very general, but these #stochasticparrots are really tuned for this task. This could explain why they are so useful for #scammers and utterly #useless for John Doe when he thinks that ChatGPT is a kind of magical search engine.
Is the Turing Test Dead?

Researchers wonder whether improved large language models require new tests for machine intelligence

IEEE Spectrum
The Turing Trap: the Promise & Peril of Human-Like Artificial Intelligence
(2022) : Erik Brynjolfsson
DOI: https://doi.org/10.1162/daed_a_01915
#HLAI #ai #augmentation #augmented_intelligence #history #human_like_ai #philosophy #turing_test
#my_bibtex

Abstract. In 1950, Alan Turing proposed a test of whether a machine was intelligent: could a machine imitate a human so well that its answers to questions were indistinguishable from a human's? Ever since, creating intelligence that matches human intelligence has implicitly or explicitly been the goal of thousands of researchers, engineers, and entrepreneurs. The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and perhaps most profoundly a better understanding of our own minds. But not all types of AI are human-like-in fact, many of the most powerful systems are very different from humans-and an excessive focus on developing and deploying HLAI can lead us into a trap. As machines become better substitutes for human labor, workers lose economic and political bargaining power and become increasingly dependent on those who control the technology. In contrast, when AI is focused on augmenting humans rather than mimicking them, humans retain the power to insist on a share of the value created. What is more, augmentation creates new capabilities and new products and services, ultimately generating far more value than merely human-like AI. While both types of AI can be enormously beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policy-makers.

MIT Press

Advanced Robotics Forced Scientists to Invent a New Turing Test
https://futurism.com/the-byte/scientists-invented-new-turing-test

#Turing_Test
