“Elegant and powerful new result that seriously undermines large language models”

Like I’ve been saying for a while now: LLMs do not think or reason. They are not on the path to AGI. They are extremely limited correlation and text synthesis machines. https://garymarcus.substack.com/p/elegant-and-powerful-new-result-that


Wowed by a new paper I just read and wish I had thought to write myself. Lukas Berglund and others, led by Owain Evans, asked a simple, powerful, elegant question: can LLMs trained on "A is B" automatically infer that "B is A"? The shocking (yet, in historical context, unsurprising) answer is no:

@baldur Richard Feynman demonstrated that poorly educated grad students have the same problem.
@resuna @baldur Can you say more about this, and ideally link to a source or the story? I am interested.
@gjdavis @baldur It's in his autobiography "Surely You're Joking, Mr. Feynman!", from when he was a visiting lecturer in Brazil.
@resuna @baldur Kind of proves the point, doesn't it? I don't want "poorly educated grad students" to write code, give medical advice, or do anything remotely critical.