Successfully plugged #cerebras #gptoss into the framework from the Recursive Language Models paper @ https://arxiv.org/abs/2512.24601
Getting ready to test it out on some long-context problems. I've long thought that a better way to handle context would be for the LLM to touch it directly as little as possible. And that's exactly what the paper's authors did: put everything in a REPL and manipulate the context via variables. "Infinite context"!
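Rough toy sketch of how I understand the idea (my reading, not the authors' actual code — the variable and helper names here are made up): the long context lives as a plain variable inside a REPL, and the model only ever touches it through small programmatic operations instead of reading it all at once.

```python
# Hypothetical sketch of the "context as a REPL variable" idea.
# The full context never enters the model's prompt; the model would
# instead emit small REPL calls like peek() or grep() against it.

context = "\n".join(f"doc {i}: " + ("filler " * 50) for i in range(1000))
context += "\ndoc 1000: the secret code is 4217"

def peek(start: int, n: int = 200) -> str:
    """Return a small window of the context (what the model would see)."""
    return context[start:start + n]

def grep(pattern: str) -> list[str]:
    """Search the context without loading it into the prompt."""
    return [line for line in context.splitlines() if pattern in line]

# Instead of ingesting ~megabytes of text, the model issues calls like:
hits = grep("secret code")
print(hits[0])  # -> doc 1000: the secret code is 4217
```

The nice property: the prompt only ever contains the model's code and the small return values, so the underlying context can grow without bound.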
