Running Gemma 4 locally with LM Studio's new headless CLI and Claude Code
https://ai.georgeliu.com/p/running-google-gemma-4-locally-with
Is it not about the same as using OpenCode?
And is running a local model with Claude Code actually usable for practical work compared to Anthropic's hosted models?
I don't think there is any incentive to do so right now because the open models aren't as good. The vast majority of businesses are going to just pay the extra cost for access to a frontier model. The model is what gives them a competitive advantage, not the harness. The harness is a lot easier to replicate than Opus.
There are benefits too. Some developers might learn to use Claude Code outside of work with cheaper models and then advocate for using Claude Code at work (where their companies will just buy access from Anthropic, Bedrock, etc.). It's similar to how free ESXi licenses for personal use helped infrastructure folks gain skills with that product, which created a healthy supply of labor and VMware evangelists eager to spread the gospel. Anthropic can't just give away access to Claude models because of cost, so there's value in allowing alternative ways for developers to learn Claude Code and develop a workflow with it.