“Language does not lead to intelligence; intelligence leads to language.”

Animals plan, adapt, and solve problems without words, and human infants do the same long before speech appears. Language emerges as a way to compress, coordinate, and transmit existing understanding.

We should not expect AGI to emerge from language, but from intelligence that can use language; so far, there is little evidence of such intelligence.

#agi #genAI

@mamund First define 'intelligence'. That's the real issue.
We have an 'intelligent behaviour' bias that means LLMs can fool us.
I agree, structurally LLMs CANNOT be independently intelligent. But they CAN replay the compressed traces of *human* intelligence well enough to appear intelligent.

@scottgal

yes, they appear to be ... that is our interpretation of the LLM's output.

there is no evidence that LLMs _understand_ their own output. there is quite a bit of evidence that they can mimic intelligence better than any other construct we've created so far.

of course, we're far from done finding the limits of LLMs, but becoming intelligent in the way animals, children, etc. are intelligent is not on the roadmap for LLMs.

@mamund I honestly don't think we're as far from these limits as the frontier LLM companies pretend. Yann LeCun, for example, agrees: LLMs CANNOT reach AGI, and they're rapidly plateauing in capability. I suspect this year will see that become OBVIOUS.