I remain very skeptical of the big #AI companies' business models and practices, but I am slowly coming around on some of the #LLM tools.

Charmbracelet Crush (https://github.com/charmbracelet/crush) is legitimately pretty cool, when paired with a local LLM (or LLM provider that isn't hoovering up your data). It's particularly good at really tedious, repetitive one-off crap that isn't worth writing a real general-purpose program for.

It has a very HAL-9000 feel to it, which is probably a good cautionary perspective.

Followup mini-review of Charmbracelet #Crush after a week or so of using it: definitely one of the better non-proprietary LLM CLI/TUI tools I've used, but the bar is not super high yet.

It's much more visually pleasant to use than #GitHub #Copilot CLI, but the latter seems to *work* better in general. Perhaps because it's tightly coupled to its own models and tools, but…?

The issue I keep hitting with Crush is that many cheap or self-hostable LLMs have small context sizes, and Crush blows them up very quickly. (I presume the system prompts are very verbose.) It tries to auto-summarize at ~80% but sometimes still bites off more than it can chew and chokes—and then all you can do is dump the session and restart.
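To put some toy numbers on it (these are made up, not Crush's actual prompt sizes), here's a back-of-envelope sketch of how a verbose system prompt plus tool definitions can eat most of a small context window before the conversation even starts, using the rough ~4-characters-per-token heuristic:

```python
# Illustrative only: hypothetical sizes, not measured from Crush.

def approx_tokens(chars: int) -> int:
    """Crude token estimate: ~4 characters per token for English text."""
    return chars // 4

CONTEXT_WINDOW = 8_192                    # typical for a small self-hostable model
system_prompt = approx_tokens(20_000)     # hypothetical verbose agent prompt
tool_schemas = approx_tokens(12_000)      # hypothetical tool/function definitions
summarize_threshold = int(CONTEXT_WINDOW * 0.80)  # auto-summarize at ~80%

used = system_prompt + tool_schemas
print(f"fixed overhead: {used} of {CONTEXT_WINDOW} tokens")
print(f"room left before the ~80% threshold: {summarize_threshold - used} tokens")
```

With these (invented) numbers the fixed overhead alone already exceeds the 80% trigger, which is roughly the failure mode I keep hitting: the session overflows before there's been much actual conversation.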

Still, I like the concept, and if you squint a bit, you can see how something like this—with a local model—would make a really slick natural-language shell.

It plus speech-to-text would have been great when I couldn't type.