One from the archives for #TextmodeTuesday. The post might be 3 years old, but I'm still using these snippets almost daily to visualize and debug data whilst I'm working in the Node REPL...

https://mastodon.thi.ng/@toxi/110942967462856117

#ThingUmbrella #DataViz #REPL #Terminal

Whistler: Live eBPF Programming from the Common Lisp REPL

Writing, compiling, loading, and querying eBPF programs in one Lisp form.

REPL Yell!
🌗 GitHub - gcv/julia-snail: A Julia development environment for Emacs
➤ An efficient interactive Julia development environment built for Emacs users
https://github.com/gcv/julia-snail
Julia Snail is a Julia development and REPL interaction package designed specifically for Emacs. Its design pays homage to Common Lisp's SLIME and Clojure's CIDER, aiming to give Julia an efficient, flexible interactive development experience. Snail uses a high-performance terminal emulator (vterm or Eat) to present the native REPL, which solves the display glitches of traditional Emacs buffers. Through deep integration with Emacs's native xref system, completion machinery, and CSTParser, developers can load code, jump to definitions, and complete symbols more intuitively, for a seamless development workflow.
+ Finally a tool like CIDER! For people used to writing Clojure in Emacs, Julia Snail makes switching languages much easier.
+ Although installing v
#Emacs #Julia #SoftwareDevelopment #OpenSourceTools #REPL
GitHub - gcv/julia-snail: An Emacs development environment for Julia

An Emacs development environment for Julia.

GitHub

https://lispy-gopher-show.itch.io/leonardo-calculus/devlog/1451887/my-ansi-common-lisp-condition-ontology-eg
#devlog (!) #commonLisp #programming #article on my #itchio .

Since I forgot I had to ping aral to let my blog webhook work again, here is this article exhibiting

The ANSI common lisp condition system

in particular, I wrote an #ontology for classical! #lisp expression generation supporting precondition .. postcondition = handler .. local restart, with interactive #repl exploration. The blog will resume its webhook updates next week. I should love this itch already anyway.

My ANSI common lisp condition ontology eg - Leonardo Calculus Software Individuals by screwtape

Fallow week oh fallow week. I mean, the show with cdg, kmp, and Ramin Honary then Kent Pitman’s epic ensuing mastodon thread featuring Pitman, Ramin, Gosling, Doug Merritt, and Roger Crew and others...

itch.io

@cwebber So to fix that, let me tell you about the PR for spritely hoot-repl that reduces load times of the #Guile #Scheme web #REPL in #webassembly by at least 30% ☺

https://codeberg.org/spritely/hoot-repl/pulls/4

Though I’m sure you already know, so this is just an "I answered the review" notification, but more interesting than something about LLM agents ☺

#wasm #programming

Optimize load time

This change parallelizes all resource loading. Before this change, the browser loaded all resources sequentially: first reflect.js and repl.js; after those finished, wtf8.wasm (triggered from repl.js, initiated from reflect.js); then repl.wasm (same); and finally (after a few ms pro...

Codeberg.org
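The gist of the PR can be illustrated outside the browser. A minimal asyncio sketch (hypothetical load times standing in for the real resources, not Hoot's actual loader) showing why firing all fetches concurrently bounds the total wait by the slowest resource rather than the sum:

```python
import asyncio
import time

# Hypothetical per-resource load times in seconds, standing in for
# reflect.js, repl.js, wtf8.wasm, and repl.wasm -- not real measurements.
RESOURCES = {"reflect.js": 0.03, "repl.js": 0.03,
             "wtf8.wasm": 0.05, "repl.wasm": 0.05}

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stands in for a network fetch
    return name

async def sequential() -> float:
    """Load each resource only after the previous one finished."""
    start = time.perf_counter()
    for name, delay in RESOURCES.items():
        await fetch(name, delay)
    return time.perf_counter() - start

async def parallel() -> float:
    """Start every fetch at once; total time ~ max(delays), not sum(delays)."""
    start = time.perf_counter()
    await asyncio.gather(*(fetch(n, d) for n, d in RESOURCES.items()))
    return time.perf_counter() - start

if __name__ == "__main__":
    seq = asyncio.run(sequential())
    par = asyncio.run(parallel())
    print(f"sequential: {seq:.3f}s, parallel: {par:.3f}s")
```

With these made-up numbers the sequential path pays roughly the sum of all delays while the parallel path pays roughly the largest one, which is the shape of the improvement the PR describes.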
@lobsters Let's not forget about debugging - currently it's either print logging (crap) or a full trace (heavyweight). I'm addressing this in #golang with a #TUI debugger with time travel and a #REPL https://asyncmachine.dev/wasm but that still doesn't replace actual breakpoints
asyncmachine.dev

The new release of #asyncmachine brings #WASM support and browser compat - including #aRPC, #TUI debugging and #REPL. Check out the changelog for v0.18.0 https://github.com/pancsta/asyncmachine-go/releases/tag/v0.18.0 and a dedicated WASM example https://asyncmachine.dev/wasm (with D2 diagrams).

This makes it possible to have a single #statemachine #distributed across n servers and m browsers, using efficient diff-based synchronization.

Looking fwd to #wasmio!

#golang #workflows #rpc #webassembly #d2 #go #wasmio2026

Awni Hannun (@awnihannun)

He published an example of a locally runnable LM (with a REPL) using mlx_lm.server, driving the Qwen3 Coder Next model on an M3 Ultra. It demonstrates a loop in which, whenever the model requests a computation, it repeats code generation → execution → result return. It's a simple, useful case for testing local model execution and code generation, helping developers experiment even in offline environments.

https://x.com/awnihannun/status/2025241088148300074

#mlx #repl #localai #qwen3 #developertools

Awni Hannun (@awnihannun) on X

Made a simple example of an LM with a REPL using mlx_lm.server to run locally (using Qwen3 Coder Next on an M3 Ultra). 1. Ask model to compute something (here 1000th fibonacci number). 2. Model generates the code 3. Run the code in the REPL 4. Get result and return to step 2 if

X (formerly Twitter)

Awni Hannun (@awnihannun)

In follow-up discussion of the 'Recursive LM' paper, the point emphasized is the recursive structure: the prompt is broken down, sub-LLMs each handle a partial task, and the results are stitched back together. Also highlighted as the main innovation is giving the LLM a REPL, enabling interactions such as code execution.

https://x.com/awnihannun/status/2025299976918893053

#llm #repl #rlm #languagemodel #research

Awni Hannun (@awnihannun) on X

Some replies along the lines of the key idea is breaking down the prompt and recursively running sub LLMs on it and stitching them back together. I understand that’s the central premise of the paper. But the lasting nugget is giving the LLM a REPL (which may not even be novel in

X (formerly Twitter)

Awni Hannun (@awnihannun)

The new paper 'Recursive LM (RLM)' is drawing attention. The core idea is to give the language model (LM) a REPL environment, letting the model execute code and solve problems step by step. The paper is framed as solving long-context processing, but the combination of an LM with a REPL is regarded as its most interesting innovation.

https://x.com/awnihannun/status/2025227360946237460

#research #llm #repl #rlm #nlp

Awni Hannun (@awnihannun) on X

Looking at the recursive LM (RLM) paper this morning. It's actually quite a simple and nice idea: give the LM a REPL. The paper is marketed as solving long-context. But I think the key nugget is to give the LM a REPL. The REPL is useful because: - Execute code in it -> lets you

X (formerly Twitter)
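The loop Hannun describes (ask the model, let it emit code, run the code in a REPL, feed the result back) can be sketched in a few lines of Python. Here `query_model` is a stub standing in for a real LM call (e.g. via mlx_lm.server), not the actual API:

```python
# Sketch of the "give the LM a REPL" loop from the RLM discussion.
# query_model is a stand-in for an actual LM call; it is stubbed so
# the loop is runnable without a model.

def query_model(prompt: str) -> str:
    """Stubbed LM: for the demo task it 'generates' Fibonacci code."""
    if "RESULT:" in prompt:
        # Once a computed result is in context, the model answers directly.
        return prompt.rsplit("RESULT:", 1)[1].strip()
    return ("a, b = 0, 1\n"
            "for _ in range(n - 1):\n"
            "    a, b = b, a + b\n"
            "result = b")

def run_in_repl(code: str, env: dict) -> dict:
    """Execute generated code in a persistent namespace (the 'REPL')."""
    exec(code, env)  # note: exec of model output is unsafe outside a sandbox
    return env

def repl_loop(task: str, n: int) -> str:
    env = {"n": n}
    code = query_model(task)        # steps 1-2: ask; model emits code
    env = run_in_repl(code, env)    # step 3: run the code in the REPL
    # step 4: return the result to the model for the final answer
    return query_model(f"{task}\nRESULT: {env['result']}")

if __name__ == "__main__":
    print(repl_loop("Compute the n-th Fibonacci number.", 10))  # -> 55
```

The persistent `env` dict is what makes this a REPL rather than one-shot execution: successive generated snippets can build on earlier bindings, which is the property the long-context framing leans on.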