“Elegant and powerful new result that seriously undermines large language models”

Like I’ve been saying for a while now: LLMs do not think or reason. They are not on the path to AGI. They are extremely limited correlation and text synthesis machines. https://garymarcus.substack.com/p/elegant-and-powerful-new-result-that

From Gary Marcus’s newsletter, “Marcus on AI”:

Wowed by a new paper I just read and wish I had thought to write myself. Lukas Berglund and others, led by Owain Evans, asked a simple, powerful, elegant question: can LLMs trained on “A is B” infer automatically that “B is A”? The shocking (yet, in historical context, unsurprising) answer is no.

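To make the probe concrete, here is a minimal sketch of the kind of two-way question the paper asks, written against the OpenAI Python client (v1 API). The Tom Cruise / Mary Lee Pfeiffer pair is the paper’s own headline example; the model name and prompt wording are assumptions, and the paper’s real evaluation fine-tunes models on synthetic facts rather than probing a chat endpoint like this.

```python
# A minimal sketch of the two-way probe behind the "reversal curse",
# using the OpenAI Python client (v1 API). The Tom Cruise / Mary Lee
# Pfeiffer pair is the paper's own illustrative example; the model
# name and prompt wording here are assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

# Forward direction ("A is B"): models usually answer correctly.
print(ask("Who is Tom Cruise's mother?"))

# Reverse direction ("B is A"): the paper reports far lower accuracy,
# even though the reverse question is entailed by the same fact.
print(ask("Who is Mary Lee Pfeiffer's famous son?"))
```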

@baldur Interesting, but of course “A is B” is not commutative (reversible) for all sorts of A and B. (A rose is pink; John is holding a gun.)

So even if LLMs could do this, they would still need to know when to do it.

@fishidwardrobe @baldur

And yet tech leaders seem to want to bet their businesses on #GenerativeAI systems that, given some fact “A is B”, cannot tell in which situations it is reasonable to deduce that “B is A”.
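A toy sketch of the judgment both replies are asking for: represent each fact as a typed triple and reverse only the relations that are genuinely symmetric. The relation labels and the symmetry set below are invented for the illustration; nothing like them exists in the raw text an LLM trains on.

```python
# Toy illustration of when "A is B" may be reversed. "Is" can express
# identity (symmetric) or predication (not symmetric), so a reverser
# has to know the relation type, not just the surface form. The
# relation labels below are invented for this sketch.
SYMMETRIC_RELATIONS = {"identity"}

facts = [
    ("Clark Kent", "identity", "Superman"),  # reversible
    ("a rose", "has_property", "pink"),      # not reversible
    ("John", "in_state", "holding a gun"),   # not reversible
]

def valid_inferences(facts):
    """Yield every stated fact, plus its reversal when the relation allows it."""
    for a, rel, b in facts:
        yield (a, rel, b)
        if rel in SYMMETRIC_RELATIONS:
            yield (b, rel, a)  # "Superman is Clark Kent" is also true
        # "pink is a rose" would be nonsense, so predications stay one-way

for triple in valid_inferences(facts):
    print(triple)
```

That is the crux of the replies above: the surface string “is” gives no signal about which relation it encodes, and that is exactly the call a system would need to make before flipping “A is B” into “B is A”.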