LLMs can’t reason — they just crib reasoning-like steps from their training data

https://awful.systems/post/2610681


When you ask an LLM a reasoning question, you’re not expecting it to think for you; you’re expecting that it has crawled multiple people asking semantically the same question and getting semantically the same answer from other people, and that those question-answer pairs are now encoded in its vectors.

That’s why you can ask it: because it encodes semantics.
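(For what “encodes semantics” amounts to in practice, here is a minimal sketch: a sentence-embedding model maps paraphrased questions to nearby vectors, so semantically equivalent questions score high on cosine similarity. The model name and example questions are illustrative assumptions, not anything from the post.)

```python
# Sketch: "encoding semantics" as vector similarity between paraphrases.
# Model choice and questions are hypothetical examples.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed example model

q1 = "Why does the moon cause tides?"
q2 = "How do tides arise from the moon's gravity?"

v1, v2 = model.encode([q1, q2])  # one embedding vector per question

# Cosine similarity: paraphrased questions land near each other
# in embedding space, unrelated questions do not.
cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
print(f"cosine similarity: {cos:.3f}")  # typically high for paraphrases
```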

thank you for bravely rushing in and providing yet another counterexample to the “but nobody’s actually stupid enough to think they’re anything more than statistical language generators” talking point