don’t LLMs generally already fail at the learning stage of Intelligence?

once trained, they never learn again? It just sometimes seems like they are learning, as long as the learned thing is still within their “context window”, so basically it’s still within their prompt?

On another matter, how would we evaluate actual intelligence in LLMs? Especially remembering that all of the slop-companies would immediately try to game the test.

Depends on the setup and what you call learning. If you let them, bots can write down things to remember in future prompts, and edit those “memories”.
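
As a rough sketch of what that can look like (pure Python; `call_llm` is a made-up stand-in for whatever model API you’d actually use):

```python
# Minimal sketch of a bot-managed memory scratchpad.
# call_llm is a hypothetical stand-in for a real model API.
def call_llm(prompt: str) -> str:
    return f"<model answer for {len(prompt)} chars of prompt>"  # placeholder

memories: list[str] = []  # persists across conversations

def remember(fact: str) -> None:
    memories.append(fact)   # the bot writes something down to keep

def forget(fact: str) -> None:
    memories.remove(fact)   # ...and can edit those "memories" later

def ask(question: str) -> str:
    # Every saved memory is injected into the prompt verbatim;
    # the model's weights never change.
    prompt = "Known facts:\n" + "\n".join(memories) + "\n\nQuestion: " + question
    return call_llm(prompt)

remember("the user's dog is called Rex")
print(ask("What is my dog's name?"))
```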

but these are still… prompt extensions (not sure if there is a technical word for it), right?

that’s a neat workaround for context windows, but at the core, imho any intelligence must be able to learn, and for a neural net to learn, it must change the network, i.e. weights or connections.
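
For contrast, here’s what changing the network literally means, in the smallest case I can think of: one weight, one training example, one gradient step (plain Python, no framework; the numbers are arbitrary):

```python
# One gradient-descent step on a one-weight "network": learning in
# this sense mutates the model itself, not the prompt it is given.
w = 0.5                         # the entire network: a single weight
x, target = 2.0, 3.0            # one training example

pred = w * x                    # forward pass: 1.0
loss = (pred - target) ** 2     # squared error: 4.0
grad = 2 * (pred - target) * x  # dLoss/dw: -8.0

w -= 0.1 * grad                 # the update; w is now 1.3
print(w * x)                    # 2.6, closer to the target than before
```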

If a system is able to change its output or behavior to account for new information, has it not learned?

To add on, humans kinda have a “context window” too: short-term memory vs long-term memory. It’s the integration of the two that actually constitutes learning (in my layman’s thought process).

And even then, humans forget shit all the time

No. Learning is changing behavior based on past experience, not on new information.
But… like… past experience only changes behaviour if it constitutes new information. If your past experience confirms your priors, you won’t change behaviour.

I’m not seeing it as learning, because behind the scenes the question is being changed, rather than the answer to the same question becoming correct.

Also, it’s rather severely limited by the context length, which in this case means a limit on how much can be learned.
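
To make that limit concrete, a sketch under made-up assumptions (a fake `MAX_TOKENS` budget, word counts standing in for tokens): once the scratchpad outgrows the context budget, the oldest “memories” simply fall out of the prompt.

```python
# The scratchpad competes with the question for one fixed context budget.
MAX_TOKENS = 8192  # hypothetical limit; word counts stand in for tokens

def build_prompt(memories: list[str], question: str) -> str:
    budget = MAX_TOKENS - len(question.split())
    kept: list[str] = []
    for m in reversed(memories):   # prefer the most recent facts
        cost = len(m.split())
        if cost > budget:
            break                  # older "learning" silently drops out
        kept.append(m)
        budget -= cost
    return "\n".join(reversed(kept)) + "\n\n" + question
```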