Running Gemma 4 locally with LM Studio's new headless CLI and Claude Code
https://ai.georgeliu.com/p/running-google-gemma-4-locally-with
LM Studio offers an Anthropic-compatible local endpoint, so you can point Claude Code at it and it will route its requests to your local model. However, I've had a lot of problems with LM Studio and Claude Code losing its place: it thinks for a while, comes up with a plan, starts executing it, and then halts partway through. If I ask it to continue, it makes one small change and gets stuck again.
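For reference, the wiring looks roughly like this. This is a sketch, not a verified recipe: it assumes Claude Code's `ANTHROPIC_BASE_URL` override, LM Studio's default server port of 1234, and a placeholder model name you'd replace with whatever model you have loaded.

```shell
# Point Claude Code at LM Studio's Anthropic-compatible local server.
# Port 1234 is LM Studio's default; adjust if you've changed it.
export ANTHROPIC_BASE_URL="http://localhost:1234"
# Local servers generally don't validate the token; any value works.
export ANTHROPIC_AUTH_TOKEN="lm-studio"

# "your-local-model-name" is a placeholder for the model loaded in LM Studio.
claude --model "your-local-model-name"
```

With these variables set, Claude Code sends its API calls to the local server instead of Anthropic's hosted endpoint.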
Using Ollama's API doesn't have the same issue, so I've stuck with Ollama for local development work.