My current local dev setup: qwen3.5:35b running on Ollama. OpenCode config: choose the "remote Ollama" provider (important!! don't use the OpenAI-compatible provider), then select qwen3.5:35b from your list of available remote Ollama models. (Optional: I run Ollama headless on a 32GB Mac mini tucked away out of sight in a closet, and I work from my new MacBook Air over Tailscale from any location.)
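For anyone wanting to replicate the headless part, here's a rough sketch of the Mac mini side. This is an assumption-laden outline, not my exact setup: the key detail is that Ollama only listens on 127.0.0.1 by default, so it needs `OLLAMA_HOST` set before other machines on your tailnet can reach it.

```shell
# On the Mac mini (the headless Ollama host):
# bind all interfaces so tailnet machines can reach port 11434
export OLLAMA_HOST=0.0.0.0:11434
ollama serve &

# pull the model once on the host
ollama pull qwen3.5:35b

# Find the mini's Tailscale address (run this on the mini):
tailscale ip -4

# On the MacBook Air: verify connectivity
# (replace 100.x.y.z with the address from the previous step)
curl http://100.x.y.z:11434/api/tags
```

Then point OpenCode's remote Ollama provider at that URL. If you have Tailscale's MagicDNS enabled, you can usually use the machine's name (e.g. `http://your-mini-hostname:11434`) instead of the raw 100.x address.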
I never dreamed I would have such a good setup on my own inexpensive hardware.