Whistler: Live eBPF Programming from the Common Lisp REPL
https://atgreen.github.io/repl-yell/posts/whistler/
#HackerNews #Whistler #eBPF #CommonLisp #REPL #Programming #LiveCoding
https://lispy-gopher-show.itch.io/leonardo-calculus/devlog/1451887/my-ansi-common-lisp-condition-ontology-eg
#devlog (!) #commonLisp #programming #article on my #itchio .
Since I forgot I had to ping aral to get my blog webhook working again, here is an article exhibiting
the ANSI Common Lisp condition system.
In particular, I wrote an #ontology for classical! #lisp expression generation supporting precondition .. postcondition = handler .. local restart, plus
interactive #repl exploration. The blog will resume its webhook updates next week. I should love this itch already anyway.

Fallow week oh fallow week. I mean, the show with cdg, kmp, and Ramin Honary, then Kent Pitman's epic ensuing Mastodon thread featuring Pitman, Ramin, Gosling, Doug Merritt, Roger Crew, and others...
@cwebber So to fix that, let me tell you about the PR for spritely hoot-repl that reduces load times of the #Guile #Scheme web #REPL in #webassembly by at least 30% ☺
https://codeberg.org/spritely/hoot-repl/pulls/4
Though I'm sure you already know, so this is just an "I answered the review" notification, but more interesting than something about LLM agents ☺

This change parallelizes all resource loading. Before this change, all resources were loaded by the browser sequentially: first reflect.js and repl.js. After those are finished, wtf8.wasm (triggered from repl.js, initiated from reflect.js). Then repl.wasm (same). And finally (after a few ms pro...
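For what it's worth, a minimal sketch of the difference, in Python asyncio rather than the browser JavaScript the PR actually touches (the `load` coroutine and its fake latency are stand-ins; only the file names come from the description above): the old path awaits each resource in turn, the new one starts them all and awaits them together.

```python
import asyncio

RESOURCES = ["reflect.js", "repl.js", "wtf8.wasm", "repl.wasm"]

async def load(resource: str) -> str:
    """Stand-in for a browser fetch; just simulates network latency."""
    await asyncio.sleep(0.1)
    return resource

async def sequential() -> list[str]:
    # Old behaviour: each resource waits for the previous one to finish.
    out = []
    for r in RESOURCES:
        out.append(await load(r))
    return out

async def parallel() -> list[str]:
    # New behaviour: kick off every request up front, await them together.
    return await asyncio.gather(*(load(r) for r in RESOURCES))

asyncio.run(parallel())
```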
The new release of #asyncmachine brings #WASM support and browser compat - including #aRPC, #TUI debugging and #REPL. Check out the changelog for v0.18.0 https://github.com/pancsta/asyncmachine-go/releases/tag/v0.18.0 and a dedicated WASM example https://asyncmachine.dev/wasm (with D2 diagrams).
This makes it possible to have a single #statemachine #distributed across n servers and m browsers, using efficient diff-based synchronization (a rough sketch of the diffing idea follows below).
Looking fwd to #wasmio!
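As a rough illustration of what diff-based synchronization means here (this is not asyncmachine's Go API, and the state names are invented), each replica applies only the keys that changed instead of receiving the whole state on every update:

```python
def diff(old: dict, new: dict) -> dict:
    """Changed/added keys, plus None tombstones for removed keys (toy convention)."""
    patch = {k: v for k, v in new.items() if old.get(k) != v}
    patch.update({k: None for k in old if k not in new})
    return patch

def apply_patch(state: dict, patch: dict) -> dict:
    """Merge non-tombstone entries, then drop keys the patch tombstoned."""
    merged = {**state, **{k: v for k, v in patch.items() if v is not None}}
    return {k: v for k, v in merged.items() if patch.get(k, v) is not None}

server = {"Connected": True, "Downloading": True}   # hypothetical machine states
client = dict(server)                               # replica in a browser

server_next = {"Connected": True, "Downloaded": True}
client = apply_patch(client, diff(server, server_next))
assert client == server_next
```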
Awni Hannun (@awnihannun)
Released an example of an LM (REPL included) that can run locally using mlx_lm.server. It runs the Qwen3 Coder Next model on an M3 Ultra and shows a loop that repeats code generation → execution → returning the result whenever a computation is requested. It is a simple case that is useful for testing local model execution and code generation, and it helps developers experiment even in offline environments.

Made a simple example of an LM with a REPL using mlx_lm.server to run locally (using Qwen3 Coder Next on an M3 Ultra). 1. Ask model to compute something (here 1000th fibonacci number). 2. Model generates the code 3. Run the code in the REPL 4. Get result and return to step 2 if
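The loop in that tweet is easy to mock up. A minimal sketch, assuming mlx_lm.server is already running and exposes its usual OpenAI-style chat endpoint (the port, payload fields, and prompt wording here are my assumptions, not from the post), with a throwaway exec() standing in for the REPL:

```python
import re, io, contextlib, requests

URL = "http://localhost:8080/v1/chat/completions"  # assumed mlx_lm.server default port/path
history = [{"role": "user",
            "content": "Compute the 1000th Fibonacci number. "
                       "Reply with one Python code block that prints it."}]

for _ in range(5):  # bound the generate -> run -> feed-back loop
    resp = requests.post(URL, json={"messages": history, "max_tokens": 512}).json()
    reply = resp["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})

    match = re.search(r"`{3}(?:python)?\s*(.*?)`{3}", reply, re.S)  # step 2: model wrote code?
    if not match:
        break  # no code block: treat the reply as the final answer
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):  # step 3: run the code in a throwaway namespace
        exec(match.group(1), {})
    history.append({"role": "user", "content": "REPL output:\n" + buf.getvalue()})  # step 4

print(history[-1]["content"])
```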
Awni Hannun (@awnihannun)
In the follow-up discussion around the 'Recursive LM' paper, the point being stressed is that the core is a recursive structure: the prompt is broken down, sub-LLMs each carry out part of the work, and the results are merged. Giving the LLM a REPL so it can interact by executing code is also mentioned as the main innovation.

Some replies along the lines of: the key idea is breaking down the prompt and recursively running sub LLMs on it and stitching them back together. I understand that's the central premise of the paper. But the lasting nugget is giving the LLM a REPL (which may not even be novel in
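Sketched very loosely (nothing here comes from the paper; `ask_llm` is a hypothetical model call and the character-based split is a deliberately naive stand-in for real prompt decomposition), the recursive shape being described is:

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical model call; in practice this would hit an LLM API."""
    raise NotImplementedError

def split(prompt: str, max_len: int) -> list[str]:
    """Naive decomposition: chunk the prompt so each piece fits the model."""
    return [prompt[i:i + max_len] for i in range(0, len(prompt), max_len)]

def recursive_lm(prompt: str, max_len: int = 4000) -> str:
    if len(prompt) <= max_len:           # base case: small enough to answer directly
        return ask_llm(prompt)
    partials = [recursive_lm(p, max_len) for p in split(prompt, max_len)]
    # Stitch sub-answers back together with one more (possibly recursive) call.
    return recursive_lm("Combine these partial answers:\n" + "\n".join(partials), max_len)
```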
Awni Hannun (@awnihannun)
The 'Recursive LM (RLM)' paper is getting attention. The core idea is to give the language model (LM) a REPL environment, which lets the model solve problems step by step while executing code. The paper is presented as solving long-context handling, but the combination of an LM with a REPL is what stands out as the most interesting innovation.

Looking at the recursive LM (RLM) paper this morning. It's actually quite a simple and nice idea: give the LM a REPL. The paper is marketed as solving long-context. But I think the key nugget is to give the LM a REPL. The REPL is useful because: - Execute code in it -> lets you
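The "give the LM a REPL" part is largely about persistence: results from one turn stay around for the next, so the model can build on them step by step. A toy illustration (my own sketch, not anything from the paper or from mlx_lm):

```python
import io, contextlib

class ToyRepl:
    """Minimal persistent REPL a model could call: state survives between turns."""
    def __init__(self):
        self.env: dict = {}                    # shared namespace across calls

    def run(self, code: str) -> str:
        buf = io.StringIO()
        try:
            with contextlib.redirect_stdout(buf):
                exec(code, self.env)           # NOTE: no sandboxing; toy example only
        except Exception as e:                 # surface errors so the caller can retry
            return f"{type(e).__name__}: {e}"
        return buf.getvalue()

repl = ToyRepl()
repl.run("xs = list(range(10))")               # turn 1: define some state
print(repl.run("print(sum(xs))"))              # turn 2: reuse it -> prints 45
```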