This essay is an utterly brilliant take on #AIhype. I'll put a few excerpts here, but you should definitely go read the whole thing:

https://karawynn.substack.com/p/language-is-a-poor-heuristic-for

>>

Language Is a Poor Heuristic for Intelligence

With the emergence of LLM “AI”, everyone will have to learn what many disabled people have always understood

Nine Lives

"Advances over the past year in the misnamed field of “artificial intelligence” have activated the inverse form of the heuristic that haunts so many disabled humans: most people see the language fluency exhibited by large language models (LLMs) like ChatGPT and erroneously assume that the computer possesses intelligent comprehension — that the program understands both what users say to it, and what it replies."

>>

"Not only are we not close to developing “artificial general intelligence”, we are not even far away from developing AGI, because we haven’t even found a path that could conceivably lead to AGI."

>>

"One thing that particularly seems to lead people astray is the way that ChatGPT gives the impression of “apologizing” in response to exterior challenges. OpenAI’s claim that ChatGPT will “admit its mistakes” is worded to suggest that the algorithm both understands that it has made an error and is in the active process of improving its understanding based on the dialogue in progress."

>>

@emilymbender the apologizing is such a weird trick. Because as a naive user, when you detect an error you will then have it apologize till you get an answer you agree with. So in a weird way the bot is just exploring the possible answer space till it finds an answer you agree with.
@Soyweiser @emilymbender And this is a conscious, sociopathic UI choice. The UI could show you different possible paths, with probabilities assigned to them, that would dispel the magic and make the tool more useful. But that's not what its owners want it to do. They want a machine for deception at scale.
@dalias @Soyweiser @emilymbender The problem is that LLMs don't just generate a few possibilities. They generate thousands of possibilities per word and have no way of knowing which are the handful of possibilities that matter vs the many that are just rewordings.
@BernieDoesIt @Soyweiser @emilymbender Indeed, I'm not sure exactly how you'd make such an exploratory tool, but researching ways to do that would be a lot more productive (from a social benefit sense) than making automatic bs generation at scale.
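(For readers unfamiliar with the "thousands of possibilities per word" point above: an LLM's output at each step is a probability distribution over its whole vocabulary, and the chat UI simply samples one token from it. A minimal sketch of what surfacing the top few alternatives might look like — the logits here are made-up toy numbers, not output from any real model:)

```python
import math

def softmax(logits):
    """Convert raw scores (logits) into a probability distribution."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def top_k(probs, k=3):
    """Return the k most probable tokens with their probabilities."""
    return sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Hypothetical logits for the next token after some prompt.
# A real model produces one such score for every token in its
# vocabulary (tens of thousands), at every single step.
logits = {"Paris": 9.1, "Lyon": 4.2, "the": 3.8, "a": 2.5, "Berlin": 1.0}
probs = softmax(logits)

for token, p in top_k(probs):
    print(f"{token}: {p:.3f}")
```

An "exploratory" UI in the spirit of the thread would show the user several of these ranked alternatives at each step instead of silently sampling one — though, as noted above, distinguishing meaningful branches from mere rewordings is the hard part.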