Follow-up mini-review of Charmbracelet #Crush after a week or so of using it: definitely one of the better non-proprietary LLM CLI/TUI tools I've used, but the bar is not super high yet.
It's much more visually pleasant to use than #Github #Copilot CLI, but the latter seems to *work* better in general. Perhaps because of tight coupling to the models/tools, but…?
The issue I keep hitting with Crush is that many cheap or self-hostable LLMs have small context sizes, and Crush blows them up very quickly. (I presume the system prompts are very verbose.) It tries to auto-summarize at ~80% but sometimes still bites off more than it can chew and chokes—and then all you can do is dump the session and restart.
Still, I like the concept, and if you squint a bit, you can see how something like this—with a local model—would make a really slick natural-language shell.
It plus speech-to-text would have been great when I couldn't type.
#GitHub users have until April 24 to opt out of unnecessary #Copilot training data.
Direct link to settings: https://github.com/settings/copilot/features
I would recommend going through that page and disabling every option you're allowed to disable.
🔥GitHub: We're going to train on your data after all
"If a Copilot user has their settings set to enable model training on their interaction data, code snippets from private repositories can be collected and used for model training while the user is actively engaged with Copilot while working in that repository."
https://www.theregister.com/2026/03/26/github_ai_training_policy_changes/