Running Gemma 4 locally with LM Studio's new headless CLI and Claude Code

https://ai.georgeliu.com/p/running-google-gemma-4-locally-with


LM Studio 0.4.0 introduced llmster and the lms CLI. Here is how I set up Gemma 4 26B for local inference on macOS so it can be used with Claude Code.

George Liu

ollama launch claude --model gemma4:26b

It's amazing how simple this is, and it just works if you have ollama and claude installed!
For some reason, that doesn't work for me: claude never returns and appears to be stuck in a loop. Nemotron, GLM, and Qwen 3.5 work just fine; Gemma doesn't.

Since that tag defaults to the q4 variant, try the q8 one:

ollama launch claude --model gemma4:26b-a4b-it-q8_0