It's amazing that people keep "discovering" (and writing journal papers and books about it) that GPTs/LLMs are just big "predictive text" machines. A GPT is, by definition, a program performing sophisticated, conditional mimicry. It is mostly designed to fool gullible humans. With a vast amount of input data, industrial-scale mimicry can entertain and distract humans for hours. It's a parlour game.
A text generator might pass a Turing test, but that only means it can fool a human into believing the generator is alive or responsive when it isn't. Turing's "artificial" intelligence test was never about machine awareness or actual consciousness, so any trick that fools a human will do. It is about artifice, not real intelligence.
All GPT generator software uses huge amounts of data to make its mimicry seem nuanced and comprehensive. A simpler program would produce responses that are too obviously plagiarised: it's not impressive to ask "write me a love song" and get a Beatles song back as if it were an original answer. The trick in these text programs is to jumble up different pieces of information (all of it plagiarised) so that the result seems original. Eventually, though, the jumbling produces responses that the interacting person finds obviously impossible to accept, and that fails the Turing test. The less knowledgeable the person, the less they notice when this happens.
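The "jumbling" idea can be shown in miniature. The sketch below is a toy bigram model, not how any real GPT is built: it memorises which word followed which in a tiny corpus, then emits text by repeatedly sampling a word that was seen to follow the current one. Every output word is lifted straight from the source data, yet the sequence can be a remix that never appeared verbatim. All names here (`follows`, `generate`) are my own illustrative choices.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the "vast amount of input data".
corpus = "the cat sat on the mat the dog sat on the log the cat saw the dog".split()

# Bigram table: for each word, every word observed to follow it in the corpus.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n, seed=0):
    """Emit up to n words by sampling a word seen after the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # dead end: no word was ever seen after this one
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the", 8))
```

Nothing in the output is invented: every adjacent word pair exists somewhere in the corpus, which is exactly the sense in which the jumbled result is plagiarised rather than original.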
An Atari chess program beats ChatGPT. The reason? A GPT mimics language, not good chess moves; it doesn't reason about chess. Expecting a program that stores less knowledge of the game's moves and positions (the GPT) to beat one built around them is foolish.
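The contrast is between predicting text and searching moves. A minimal sketch of the latter, assuming nothing about the Atari program's actual internals: plain minimax on noughts and crosses. The engine enumerates legal moves and evaluates the positions they lead to, so it finds a winning move by reasoning about the game, with no corpus of example games at all.

```python
# Minimal minimax for noughts and crosses; board is a 9-char string,
# "X", "O", or " " per square, read row by row.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(b):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    for i, j, k in LINES:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Return (score, move): +1 means X wins, -1 O wins, 0 a draw."""
    w = winner(b)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, c in enumerate(b) if c == " "]
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        score, _ = minimax(b[:m] + player + b[m + 1:],
                           "O" if player == "X" else "X")
        if best is None or (player == "X") == (score > best[0]) and score != best[0]:
            best = (score, m)
    return best

# X to move: completing the top row at square 2 wins immediately.
print(minimax("XX OO    ", "X"))  # → (1, 2)
```

A GPT answering "what's the best move here?" does none of this search; it emits whatever move-like text its training data makes likely, which is why a decades-old dedicated engine outplays it.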
#ai #atari #chess #algorithms #turingtest #deception