LLMs can’t reason — they just crib reasoning-like steps from their training data
When you ask an LLM a reasoning question, you’re not expecting it to think for you. You’re expecting that it has crawled multiple people asking semantically the same question and getting semantically the same answer from other people, and that those exchanges are now encoded in its vectors.
That’s why you can ask it at all: because it encodes semantics.
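As a rough sketch of what “encodes semantics” can mean in practice, sentence embeddings place paraphrases close together in vector space. This uses the sentence-transformers library; the model name and example questions are my own illustrative choices, not anything from the thread:

```python
# A minimal sketch, assuming the sentence-transformers library is installed.
# The model name and the example questions are purely illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

questions = [
    "How do I reverse a list in Python?",
    "What's the way to invert the order of a Python list?",
    "How do I bake sourdough bread?",
]

# Each question becomes a vector; paraphrases land close together.
embeddings = model.encode(questions, convert_to_tensor=True)

# Cosine similarity matrix: entry [i][j] is how "semantically close"
# question i is to question j. The first two score high; the third doesn't.
print(util.cos_sim(embeddings, embeddings))
```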
Paraphrasing Neil Gaiman: LLMs don’t give you information; they give you information-shaped sentences.
They don’t encode semantics. They encode the statistical likelihood that each token will follow a given sequence of tokens.
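To make that concrete, here’s a minimal sketch of what a causal language model actually computes: a probability distribution over the next token, given the tokens so far. This assumes the Hugging Face transformers library and PyTorch; gpt2 is just a small public model used for illustration, and the prompt is my own example:

```python
# A minimal sketch, assuming PyTorch and Hugging Face transformers are installed.
# "gpt2" is chosen purely as a small public model; any causal LM works the same way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# The model's entire output reduces to this: a probability distribution
# over which token is statistically likely to come next.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {p.item():.3f}")
```

Whether those token statistics amount to “semantics” is exactly what this thread is arguing about; the code only shows what the model literally outputs.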
It was all lost long before LLMs, when people took random schizo opinions on Facebook as gospel.
We live in a post-truth world, and, all things considered, I’m not too fussed about LLMs being occasionally fallible when the average person is wrong far more often.