Don’t LLMs generally already fail at the learning stage of intelligence?

Once trained, they never learn again? It just sometimes seems like they are learning, as long as the learned thing is still within their “context window”, so basically it’s still within their prompt?

On another matter: how would we evaluate actual intelligence in LLMs? Especially remembering that all of the slop companies would immediately try to cheat the test.

Depends on the setup and what you call learning. If you let them, bots can write down things to remember in future prompts, and edit those “memories”.
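Roughly, the mechanism looks like this (a minimal sketch; the function names are made up, not any particular vendor’s API). The point is that the weights never change; “learning” here is just text carried into the next prompt:

```python
memories: list[str] = []  # persisted notes the bot can read and edit


def build_prompt(user_message: str) -> str:
    # Every saved note is injected back into the next prompt verbatim.
    memory_block = "\n".join(f"- {m}" for m in memories)
    return (
        "You may use these saved memories:\n"
        f"{memory_block}\n\n"
        f"User: {user_message}"
    )


def remember(note: str) -> None:
    # "Learning", in this setup: append text for future prompts.
    memories.append(note)


def forget(index: int) -> None:
    # Editing a "memory" is just editing stored text.
    memories.pop(index)
```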

But these are still… prompt extensions (not sure if there’s a technical term for it), right?

That’s a neat workaround for context windows, but at the core, imho, any intelligence must be able to learn, and for a neural net to learn, it must change the network, i.e. its weights or connections.
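To make the distinction concrete, here’s a toy sketch (plain numpy, nothing LLM-specific): behavior only changes persistently when the weights themselves are updated, which is exactly the step that frozen inference skips.

```python
import numpy as np

# A single linear "neuron" learns only when we actually update its
# weights from experience. During LLM inference, the update step
# below never happens; w stays frozen.

rng = np.random.default_rng(0)
w = rng.normal(size=3)          # the network's weights
x = np.array([1.0, 2.0, 3.0])   # an input ("experience")
target = 2.0                    # the behavior we want
lr = 0.01                       # learning rate

for _ in range(20):
    pred = w @ x                      # forward pass
    grad = 2 * (pred - target) * x    # gradient of squared error
    w -= lr * grad                    # weight update = actual learning

print(w @ x)  # ~2.0: behavior changed because the weights changed
```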

If a system is able to change its output or behavior to account for new information, has it not learned?

No. Learning is changing behavior based on past experience, not new information.

But… like… past experience only changes behaviour if it constitutes new information. If your past experience confirms your priors, you won’t change behaviour.