Successfully plugged #cerebras #gptoss into the framework from the Recursive Language Models paper @ https://arxiv.org/abs/2512.24601

Getting ready to test it out on some long-context problems. I've been thinking for a long time that a better way to handle context would be for the LLM to touch its context directly as little as possible. And that's exactly what the paper's authors did: they put everything in a REPL and had the model manipulate the context via variables. "Infinite context"!
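The core idea can be sketched in a few lines. This is my own toy illustration of the REPL-with-variables pattern (not the paper's actual implementation): the long context lives in a Python variable, and the model only ever sees small slices it explicitly asks for, never the raw text in its prompt.

```python
# Toy sketch of REPL-style context handling: the model never reads the
# full context; it calls small helpers and only the results enter its prompt.

long_context = "line one\n" * 100_000  # stand-in for a huge document

# A tiny "environment" of operations the model could invoke from the REPL
# instead of having the context pasted into its prompt window.
env = {
    "len": lambda: len(long_context),
    "peek": lambda start, end: long_context[start:end],
    "grep": lambda needle: [
        i for i, line in enumerate(long_context.splitlines())
        if needle in line
    ][:10],
}

# The model would emit calls like these; each returns a tiny payload,
# so the prompt stays short no matter how big long_context gets.
print(env["len"]())            # total size in characters
print(env["peek"](0, 9))       # a short slice
print(env["grep"]("one")[:3])  # first few matching line indices
```

The helper names (`peek`, `grep`) are made up for the sketch; the point is just that the context is a variable the model queries, not a string it reads.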

#AI #ML #paper

Had a lot of fun today trying different models with the RLM framework against OOLONG dataset problems. I didn't accomplish anything meaningful, but it was a nerdy good time.

gpt-oss-120b handles the REPL quite well. With some tuning I think it could be great.

Per a friend: gpt-oss-20b struggles a little but is still effective.

I tried a slew of small models (200M to 1.5B); they can't hack it.

Oh, lfm-2.5 1.5b managed to complete one short OOLONG! And I feel like functiongemma could be tuned to do it, too.

#AI