What Are These Movies?!? / Whistle (2025) Movie Review
#rlm #redlettermedia #WhatAreTheseMovies #WhistleMovie #beyondtheblackvoid

Star Wars Trivia!
#rlm #redlettermedia #StarWars #JackQuaid

RLMs show that there is hidden, unlockable fluid intelligence in LLMs, specifically on tasks that require genuine test-time reasoning, where no amount of memorisation helps, e.g. ARC-AGI-2.
A couple of condensed points:
- The REPL-as-environment pattern is general, not just a long-context trick. The RLM paper uses long context as the motivating use case for writing symbolic programs over the input prompt, but the pivotools / Symbolica papers show that agentic coding (a persistent REPL + iterative interaction + optional recursive self-calling) dramatically improves reasoning on short-context tasks too.
- RLM is a third scaling axis alongside chain-of-thought and tool calling: the RLM controls its own context-management behaviour.
- RLM trajectories are a trainable objective via RL; fine-tuning on RLM trajectories yields measurable improvements.
- The underlying mechanism, grounding LLM reasoning in concrete code execution and feedback, has broad applicability.
- Recursive delegation provides a real additional gain.
- Interleaved thinking is currently fragile but transformative: pivotools complained that many inference providers and open-weight models respond without actually doing interleaved reasoning.

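The loop behind the points above can be sketched in a few lines. This is a toy illustration, not the paper's actual harness: `llm(transcript) -> action` is a hypothetical callable, and the `FINAL:` convention and `rlm` self-call helper are assumptions made up here. The key detail is that the long prompt lives in the REPL namespace as a variable, so the model's context only ever holds the interaction transcript.

```python
# Toy sketch of an RLM loop: the model emits code cells, we execute them in a
# persistent namespace, and feed captured stdout back into the transcript.
# `llm` and the FINAL: convention are hypothetical stand-ins.
import io
import contextlib


def run_cell(code: str, namespace: dict) -> str:
    """Execute one code cell in the shared namespace, capturing stdout."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, namespace)
    return buf.getvalue()


def rlm(prompt: str, llm, max_steps: int = 8) -> str:
    # The (possibly huge) prompt is bound to a variable inside the REPL,
    # and a recursive self-call is exposed for delegation to sub-instances.
    namespace = {"prompt": prompt, "rlm": lambda p: rlm(p, llm, max_steps)}
    transcript = f"Prompt is bound to `prompt` ({len(prompt)} chars)."
    for _ in range(max_steps):
        action = llm(transcript)  # model returns a code cell or "FINAL: ..."
        if action.startswith("FINAL:"):
            return action[len("FINAL:"):].strip()
        transcript += f"\n>>> {action}\n{run_cell(action, namespace)}"
    return transcript
```

The model never sees the raw prompt unless it writes code that prints a slice of it, which is exactly the "symbolic programs over the input prompt" idea in the notes.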
Ranking Every Sam Raimi Movie Part 3 - re:View
#rlm #redlettermedia #samraimi

Best of the Worst: Wheel of the Worst #31
#rlm #redlettermedia #bestoftheworst #wheeloftheworst

Ranking Every Sam Raimi Movie Part 2 - re:View
#rlm #redlettermedia #samraimi

Awni Hannun (@awnihannun)
In the follow-up discussion of the 'Recursive LM' paper, the point emphasized is the recursive structure: the prompt is split up, sub-LLMs each handle part of the work, and the results are combined. Giving the LLM a REPL, enabling interactions like code execution, is also cited as the main innovation.

Some replies are along the lines of: the key idea is breaking down the prompt, recursively running sub-LLMs on it, and stitching the results back together. I understand that’s the central premise of the paper. But the lasting nugget is giving the LLM a REPL (which may not even be novel in

Awni Hannun (@awnihannun)
The new paper 'Recursive LM (RLM)' is getting attention. The core idea is giving the language model (LM) a REPL environment, which lets the model solve problems step by step while executing code. The paper is framed as solving long-context processing, but the combination of LM and REPL is really the most interesting innovation.

Looking at the recursive LM (RLM) paper this morning. It's actually quite a simple and nice idea: give the LM a REPL. The paper is marketed as solving long-context. But I think the key nugget is to give the LM a REPL. The REPL is useful because: - Execute code in it -> lets you
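The REPL point in the tweet above comes down to persistent state: unlike one-shot tool calls, each executed cell can build on the results of the last. A toy illustration with `exec()` and a shared namespace (an assumption for illustration; the paper's actual sandbox is presumably more elaborate):

```python
# State persists across cells because they share one namespace dict,
# so a later cell can reuse variables a model defined in an earlier one.
namespace = {}
cells = [
    "chunks = ['part one', 'part two']",               # cell 1: stash data
    "summary = ', '.join(c.upper() for c in chunks)",  # cell 2: reuse it
]
for cell in cells:
    exec(cell, namespace)
print(namespace["summary"])  # PART ONE, PART TWO
```

This accumulation of intermediate results is what makes the REPL an environment rather than a stateless tool.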
Untitled Movie Genre Themed Trivia Related Program (Alternate Edit)
#rlm #redlettermedia

Trivia of Terror
#rlm #redlettermedia
