free models on openrouter.ai appear to be pretty much unusable... that was a waste of time.

... and i think my ollama is borked due to nvidia container stuff not doing its thing?!? fixing nvidia drivers is pretty far down the list of things i want to do :\

#BlaBlaBla

good news :D nvidia was fine, wrong docker compose config file was running for some reason ¯⁠\⁠_⁠(⁠ツ⁠)⁠_⁠/⁠¯

gave qwen3:8b a run using opencode.ai but it seems to just crash and freak out on any simple command (e.g. `/init`)

left it to download 30b model overnight... pretty sure that will have a bad time/fail on my old gpu though.

maybe there is some other issue with my ollama server running tools (⁠^⁠~⁠^⁠;⁠)⁠ゞ

okay... so it's not actually possible to use opencode.ai with ollama?!?

tool calling appears to be borked?
- https://github.com/sst/opencode/issues/1034
- https://github.com/sst/opencode/issues/3122
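
one way to check whether it's opencode or the ollama server that's breaking tool calls is to hit ollama's `/api/chat` endpoint directly with an OpenAI-style tool definition and see if the model emits a `tool_calls` entry. a minimal sketch — the model name, port, and the `get_weather` tool are all assumptions about my local setup, not anything opencode does:

```python
import json
import urllib.request

# OpenAI-style tool definition, as accepted by Ollama's /api/chat endpoint
payload = {
    "model": "qwen3:8b",  # assumption: whichever tool-capable model is pulled locally
    "stream": False,
    "messages": [{"role": "user", "content": "What's the weather in Sydney?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, just to probe tool calling
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

def probe(host="http://localhost:11434"):
    """POST the chat request; a tool-capable model should answer with tool_calls."""
    req = urllib.request.Request(
        host + "/api/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        msg = json.load(resp)["message"]
    # empty list here means the model never attempted a tool call
    return msg.get("tool_calls", [])

if __name__ == "__main__":
    print(probe())
```

if this comes back with a populated `tool_calls` list, the server side is fine and the problem is in how opencode drives it.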

waste of time ++

as per https://github.com/sst/opencode/issues/2362#issuecomment-3247382727

i've set this env var on the ollama server:

`OLLAMA_CONTEXT_LENGTH=12288`
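
since my ollama runs under docker compose, that env var goes into the service definition — roughly like this (a sketch; the service name, image tag, and port mapping are assumptions about my compose file):

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    environment:
      # bump the default context window so opencode's tool-heavy prompts fit
      - OLLAMA_CONTEXT_LENGTH=12288
```

then `docker compose up -d` to recreate the container with the new env — and double-check which compose file is actually running this time ¯⁠\⁠_⁠(⁠ツ⁠)⁠_⁠/⁠¯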

and https://github.com/sst/opencode/issues/2362#issuecomment-3243343525

i've set the model config in opencode.json to:

```json
"qwen3-coder:30b": {
  "name": "qwen3-coder:30b",
  "tools": true,
  "tool_call": true
}
```
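
for context, that model entry sits under a provider block in opencode.json — roughly like this (a sketch; the provider id, npm package, and baseURL are assumptions based on the usual OpenAI-compatible setup for local ollama):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen3-coder:30b": {
          "name": "qwen3-coder:30b",
          "tools": true,
          "tool_call": true
        }
      }
    }
  }
}
```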

the robot has now successfully created an empty file... server currently sounds like a 747 trying to run `/init` and we're about 15 mins in.

ok... it has completely spazzed out and created a couple of routes rather than updating the AGENTS.md as it was instructed to do... i'm going to kill it before it wipes my HD or does something even stupider.