“Elegant and powerful new result that seriously undermines large language models”

Like I’ve been saying for a while now: LLMs do not think or reason. They are not on the path to AGI. They are extremely limited correlation and text synthesis machines. https://garymarcus.substack.com/p/elegant-and-powerful-new-result-that


Wowed by a new paper I just read and wish I had thought to write myself. Lukas Berglund and others, led by Owain Evans, asked a simple, powerful, elegant question: can LLMs trained on "A is B" automatically infer that "B is A"? The shocking (yet, given the historical context discussed below, unsurprising) answer is no:
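The probe described above can be sketched as follows. This is a hypothetical illustration, not the paper's actual evaluation code: for each known fact relating two names, it builds the question in both directions, so a model can be scored on the forward and reverse forms separately. The `make_probe_pairs` helper and the mother/son template are assumptions for illustration.

```python
# Hypothetical sketch of the forward/reverse probe: given a fact linking
# a child to a parent, ask the model in both directions and compare.

def make_probe_pairs(facts):
    """Build (question, expected answer) pairs in both directions
    from (child, parent) facts."""
    pairs = []
    for child, parent in facts:
        forward = (f"Who is {child}'s mother?", parent)
        reverse = (f"Who is {parent}'s son?", child)
        pairs.append((forward, reverse))
    return pairs

# The example from the thread below: Tom Cruise / Mary Lee Pfeiffer.
facts = [("Tom Cruise", "Mary Lee Pfeiffer")]
for (fwd_q, fwd_a), (rev_q, rev_a) in make_probe_pairs(facts):
    print(fwd_q, "->", fwd_a)
    print(rev_q, "->", rev_a)
```

The paper's finding is that models answer the forward form far more reliably than the reverse, even though the two questions encode the same fact.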

— Marcus on AI
@baldur It's not that plain an argument, since Tom Cruise has siblings: "Cruise has three sisters named Lee Anne, Marian, and Cass." (Wikipedia). Name and identity are not the same.
LLMs and knowledge are definitely a problematic field, but the test has flaws.

@Zeugs @baldur Having daughters and one famous son shouldn't have been what prevented an LLM from answering "Who is Mary Lee Pfeiffer's son?"

https://owainevans.github.io/reversal_curse.pdf