My current local dev setup: running qwen3.5:35b on Ollama. OpenCode config: choose the "remote Ollama" provider (important: don't use the OpenAI-compatible provider), then select qwen3.5:35b from your list of available remote Ollama models. (Optional: I run Ollama headless on a 32 GB Mac mini tucked out of sight in a closet, and I work from my new MacBook Air via Tailscale from any location.)

I never dreamed I would have such a good setup on my own inexpensive hardware.

@mark_watson How about OLLAMA_CONTEXT_LENGTH?

@veer66 If I am accessing Ollama from a different computer via Tailscale I set both:

OLLAMA_HOST=0.0.0.0 OLLAMA_CONTEXT_LENGTH=32768 ollama serve
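A quick way to confirm the remote instance is reachable over Tailscale is to hit Ollama's model-listing endpoint (a sketch; "mac-mini" is a placeholder for your machine's Tailscale hostname or 100.x address, and 11434 is Ollama's default port):

```shell
# List the models the remote Ollama server has pulled.
# Replace "mac-mini" with your Tailscale hostname or IP.
curl http://mac-mini:11434/api/tags
```

If this returns JSON with your models, OpenCode on the laptop should be able to use the same address.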

If I am running Ollama on the same computer and just want to increase context length I set:

OLLAMA_CONTEXT_LENGTH=32768 ollama serve
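To sanity-check the context setup (hedged: `ollama show` reports the model's own context length, and exact output fields vary by Ollama version; the context size the server actually allocates appears in the `ollama serve` startup/load logs):

```shell
# Show model details, including its trained context length.
ollama show qwen3.5:35b
```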

Good catch! Coding agents need a larger context length.