Followup mini-review of Charmbracelet #Crush after a week or so of using it: definitely one of the better non-proprietary LLM CLI/TUI tools I've used, but the bar is not super high yet.
It's much more visually pleasant to use than #Github #Copilot CLI, but the latter seems to *work* better in general. Perhaps because of tight coupling to the models/tools, but…?
The issue I keep hitting with Crush is that many cheap or self-hostable LLMs have small context sizes, and Crush blows them up very quickly. (I presume the system prompts are very verbose.) It tries to auto-summarize at ~80% but sometimes still bites off more than it can chew and chokes—and then all you can do is dump the session and restart.
Still, I like the concept, and if you squint a bit, you can see how something like this—with a local model—would make a really slick natural-language shell.
Crush plus speech-to-text would have been great back when I couldn't type.