One from the archives for #TextmodeTuesday. The post might be 3 years old, but I'm still using these snippets almost daily to visualize and debug data whilst I'm working in the Node REPL...
Whistler: Live eBPF Programming from the Common Lisp REPL
https://atgreen.github.io/repl-yell/posts/whistler/
#HackerNews #Whistler #eBPF #CommonLisp #REPL #Programming #LiveCoding
https://lispy-gopher-show.itch.io/leonardo-calculus/devlog/1451887/my-ansi-common-lisp-condition-ontology-eg
#devlog (!) #commonLisp #programming #article on my #itchio .
Since I forgot I had to ping aral to get my blog webhook working again, here is this article exhibiting
the ANSI Common Lisp condition system.
In particular, I wrote an #ontology for classical #lisp expression generation, supporting precondition .. postcondition = handler .. local restart and
interactive #repl exploration. The blog will resume its webhook updates next week. I should love this itch already anyway.

Fallow week, oh fallow week. I mean, the show with cdg, kmp, and Ramin Honary, then Kent Pitman’s epic ensuing Mastodon thread featuring Pitman, Ramin, Gosling, Doug Merritt, Roger Crew, and others...
@cwebber So to fix that, let me tell you about the PR for spritely hoot-repl that reduces load times of the #Guile #Scheme web #REPL in #webassembly by at least 30% ☺
https://codeberg.org/spritely/hoot-repl/pulls/4
Though I’m sure you already know, so this is just an "I answered the review" notification, but more interesting than something about LLM agents ☺

This change parallelizes all resource loading. Before this change, the browser loaded all resources sequentially: first reflect.js and repl.js; after those finished, wtf8.wasm (triggered from repl.js, initiated from reflect.js); then repl.wasm (same); and finally (after a few ms pro...
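The sequential-vs-parallel pattern the PR describes can be sketched abstractly (in Python with asyncio, purely for illustration; the actual change is browser JavaScript, and the file names and delays here are just placeholders mirroring the description above):

```python
import asyncio
import time

RESOURCES = ["reflect.js", "repl.js", "wtf8.wasm", "repl.wasm"]

async def fetch(name: str, delay: float = 0.05) -> str:
    # Stand-in for a network fetch; the delay is invented.
    await asyncio.sleep(delay)
    return name

async def load_sequential() -> None:
    # Before: each resource waits for the previous one to finish.
    for name in RESOURCES:
        await fetch(name)

async def load_parallel() -> None:
    # After: all fetches start immediately and run concurrently.
    await asyncio.gather(*(fetch(name) for name in RESOURCES))

t0 = time.monotonic()
asyncio.run(load_sequential())
seq = time.monotonic() - t0

t0 = time.monotonic()
asyncio.run(load_parallel())
par = time.monotonic() - t0

print(par < seq)  # concurrent loading finishes sooner
```

With four simulated resources, the parallel version takes roughly one fetch's worth of time instead of four, which is the intuition behind the ~30% load-time reduction.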
The new release of #asyncmachine brings #WASM support and browser compat - including #aRPC, #TUI debugging and #REPL. Check out the changelog for v0.18.0 https://github.com/pancsta/asyncmachine-go/releases/tag/v0.18.0 and a dedicated WASM example https://asyncmachine.dev/wasm (with D2 diagrams).
This makes it possible to have a single #statemachine #distributed across n servers and m browsers, using efficient diff-based synchronization.
Looking fwd to #wasmio!
Awni Hannun (@awnihannun)
Released an example of a locally runnable LM (with a REPL) using mlx_lm.server. It runs the Qwen3 Coder Next model on an M3 Ultra and demonstrates a loop in which the model requests a computation, then code is generated, executed, and the result returned. This is a simple case useful for testing local model execution and code generation, helping developers experiment even in offline environments.

Made a simple example of an LM with a REPL using mlx_lm.server to run locally (using Qwen3 Coder Next on an M3 Ultra). 1. Ask the model to compute something (here the 1000th Fibonacci number). 2. The model generates the code. 3. Run the code in the REPL. 4. Get the result and return to step 2 if
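The generate-run-return loop above can be sketched in a few lines of Python. Here `ask_model` is a hypothetical stub standing in for a real client call (e.g. a request to a running mlx_lm.server instance); it returns canned code so the sketch is self-contained:

```python
# Minimal sketch of the loop: the model emits code, we execute it in a
# REPL-like namespace, and read the result back. `ask_model` is a
# placeholder, NOT the real mlx_lm API.

def ask_model(prompt: str) -> str:
    # A real model would generate this code from the prompt.
    return (
        "def fib(n):\n"
        "    a, b = 0, 1\n"
        "    for _ in range(n):\n"
        "        a, b = b, a + b\n"
        "    return a\n"
        "result = fib(1000)\n"
    )

def repl_step(prompt: str, namespace: dict):
    code = ask_model(prompt)        # 2. model generates the code
    exec(code, namespace)           # 3. run it in the REPL namespace
    return namespace.get("result")  # 4. read the result back

ns = {}
answer = repl_step("Compute the 1000th Fibonacci number.", ns)
print(len(str(answer)))  # F(1000) has 209 digits
```

Keeping one persistent namespace across `repl_step` calls is what makes this a REPL rather than a one-shot sandbox: later generations can reuse definitions from earlier steps.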
Awni Hannun (@awnihannun)
In the follow-up discussion of the 'Recursive LM' paper, the point emphasized is the recursive structure: the prompt is broken down, sub-LLMs each handle a part of the work, and the results are combined. Giving the LLM a REPL, enabling interactions such as code execution, is also cited as the main innovation.

Some replies along the lines of: the key idea is breaking down the prompt, recursively running sub-LLMs on it, and stitching the results back together. I understand that’s the central premise of the paper. But the lasting nugget is giving the LLM a REPL (which may not even be novel in
Awni Hannun (@awnihannun)
The new 'Recursive LM (RLM)' paper is making the rounds. The core idea is to give the language model (LM) a REPL environment, which lets the model solve problems step by step by executing code. The paper is framed as solving long-context processing, but the combination of an LM and a REPL is arguably its most interesting innovation.

Looking at the recursive LM (RLM) paper this morning. It's actually quite a simple and nice idea: give the LM a REPL. The paper is marketed as solving long-context, but I think the key nugget is to give the LM a REPL. The REPL is useful because: - Execute code in it -> lets you