@jhlagado I got a copy, but I doubt it will ever be useful to me. The Emacs Claude Code system is a far superior interface to anything the in-browser (or Electron-based) client/server interface to the Claude LLM could ever be.
The great irony of #AgenticAI is that it is basically a rediscovery of the old Lisp machines’ integrated programming environments. The idea of having all of the software in your computing environment fully integrated and controllable with natural language goes way back to Douglas Engelbart’s “Mother of All Demos” and Terry Winograd’s “SHRDLU” system. These ideas were first realized as a commercial product at companies like Symbolics with the Genera OS in the late 1970s and early 1980s. Emacs is one of the last surviving integrated programming environments from that era.
The idea that any computing problem can be solved with an app, and the idea of app stores, was always a regression. Apps are a way of maximizing the rent-seeking ability of tech companies. Algorithms and math equations are free, and that won’t do at all. So wrap up every algorithm, and every math equation, into a little package you can manipulate on a touch screen and sell them to people. You can even sell the same exact algorithm a dozen times, each with a different parameter configuration, as long as you create a new icon for each one.
But now that Agentic AI is a thing, I think people won’t tolerate the idea of apps anymore. The old mantra “there’s an app for that” may finally be on its way out. Who needs apps if you can just ask an AI to make one for you? And who needs an AI to make apps for you if you can just ask it to perform a task and let it choose the right algorithm to accomplish that task?
So that might be the one thing that is actually good about AI: it made people aware of the rent-seeking nature of “apps.” Hopefully, people will go back to the old Lisp Machine model of computing, or “Agentic AI” as they are calling it nowadays. The big difference now is that (1) we have enough computing power to train these massive multi-layer perceptrons they call “LLMs,” (2) we have an Internet full of digitized information, free* for training the LLMs, and (3) everyone has an Internet-connected computer in their pocket.
(* Of course, anything is free if you just take it without paying.)