“Elegant and powerful new result that seriously undermines large language models”

Like I’ve been saying for a while now: LLMs do not think or reason. They are not on the path to AGI. They are extremely limited correlation and text synthesis machines. https://garymarcus.substack.com/p/elegant-and-powerful-new-result-that

Wowed by a new paper I just read and wish I had thought to write myself. Lukas Berglund and others, led by Owain Evans, asked a simple, powerful, elegant question: can LLMs trained on "A is B" automatically infer that "B is A"? The shocking (yet, in historical context, unsurprising; see below) answer is no:

Marcus on AI

@baldur
I'm not at all surprised!
They are stochastic parrots.
They are very good at language.
They are kinda like that bullsh1t con artist guy you briefly knew in college, who could convince anyone of anything but didn't actually know anything himself.

Super helpful for language understanding and as an interface, though!
They will still be useful in that niche.