LLMs can’t reason — they just crib reasoning-like steps from their training data

https://awful.systems/post/2610681


When you ask an LLM a reasoning question, you’re not expecting it to think for you; you’re expecting that it has crawled multiple people asking semantically the same question and getting semantically the same answer from other people, and that those exchanges are now encoded in its vectors.

That’s why you can ask it: because it encodes semantics.
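
(A minimal sketch of what that claim amounts to, for the curious: paraphrases land near each other in embedding space, so similarity lookup can pass for reasoning. The vectors below are toy values standing in for the output of some hypothetical embed() model; nothing here comes from a real LLM.)

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: ~1.0 for same direction, ~0.0 for unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for a hypothetical embed(sentence) model.
q1 = np.array([0.90, 0.10, 0.30])  # "why does my code deadlock?"
q2 = np.array([0.85, 0.15, 0.35])  # "what causes this deadlock?" (paraphrase)
q3 = np.array([0.10, 0.90, 0.20])  # "best banana bread recipe" (unrelated)

print(cosine(q1, q2))  # high: semantically-same questions cluster together
print(cosine(q1, q3))  # low: the unrelated question sits elsewhere
```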

*guy who totally gets what these words mean* an llm simply encodes the semantics into the vectors
all you gotta do is, you know, ground the symbols, and as long as you’re writing enough Lisp that should be sufficient for GAI

both your comments made my eye twitch

like what’d happen if bob fucked up the symbols in a pentacle

also why do we need getaddrinfo? the promptfans will always readily tell you who they are
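
(For anyone who missed the getaddrinfo reference: it’s the standard C call for resolving a hostname into usable socket addresses, which a chatbot can’t do for you because the answer depends on live DNS and your network at call time. A quick sketch via Python’s socket.getaddrinfo, a thin wrapper over the same C API:)

```python
import socket

# Resolve example.com for TCP on port 443; returns one record per
# available address (IPv4 and/or IPv6, depending on your resolver).
for family, type_, proto, canonname, sockaddr in socket.getaddrinfo(
    "example.com", 443, proto=socket.IPPROTO_TCP
):
    print(family, sockaddr)  # e.g. AddressFamily.AF_INET ('<ipv4 addr>', 443)
```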