The switch from #claudecode to open-source tools and models is working well for me.
1. The CLI client I use is llxprt, a fork of the #gemini CLI.
https://github.com/acoliver/llxprt-code
The Desktop client is #anythingllm
https://anythingllm.com/
Both have MCP support and work on macOS and Linux.
2. For large codebases I use ck search in my prompts and have llxprt call it via Bash tool calls.
https://github.com/BeaconBay/ck
My experience with claude-context is that the embedding costs are high and the results don't justify them:
https://github.com/zilliztech/claude-context
I stopped using it.
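A minimal sketch of the kind of instruction I mean, pasted into a prompt so the model reaches for ck via the Bash tool. The exact ck flags are an assumption from my reading of its README, not verified; check `ck --help` before relying on them:

```text
When you need to locate code in this repository, do NOT read files blindly.
Use the Bash tool to run ck instead, e.g.:

  ck "parse_config" src/                       # grep-style regex search
  ck --sem "where is retry handled?" src/      # semantic search (assumed flag)

Only open the files that ck returns.
```

The point is that the model spends tool calls on targeted searches instead of filling the context window with whole files.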
3. I mostly use #OpenRouter
gpt-oss-120b as my main coding model.
kimi-k2-0905 or glm-4.5 for selected tasks, such as finding a function or summarizing test output.
These models are on par with #sonnet or #opus:
https://artificialanalysis.ai/models/glm-4.5?model-filters=open-source
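Since OpenRouter exposes an OpenAI-compatible chat-completions endpoint, switching models is just a matter of changing the model slug in the request body. A minimal sketch (the model slug and prompt are illustrative; check current slugs on openrouter.ai):

```python
import json

# OpenRouter's OpenAI-compatible chat-completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON body for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Swapping between gpt-oss, Kimi, or GLM is only a slug change:
body = build_request("openai/gpt-oss-120b", "Summarize this test output: ...")
print(json.dumps(body, indent=2))
```

Send it with any HTTP client, with an `Authorization: Bearer $OPENROUTER_API_KEY` header.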
4. I mostly optimize my prompts, tools, and workflows.
I write my own MCP tools, and use specific RAG approaches to establish the context.
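To give an idea of what "establishing the context" can mean in practice, here is a deliberately naive retrieval sketch (stdlib only, not my actual pipeline): score candidate snippets against the task, keep the top few, and prepend them to the prompt.

```python
# Illustrative RAG-style context builder (toy scoring, not a real pipeline).

def score(query: str, doc: str) -> int:
    """Naive relevance: count lowercase words shared by query and doc."""
    q = set(query.lower().split())
    return len(q & set(doc.lower().split()))

def build_context(query: str, docs: list[str], k: int = 2) -> str:
    """Return the k most relevant snippets, joined for prompt injection."""
    top = sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]
    return "\n---\n".join(top)

docs = [
    "def retry(fn): re-run fn with exponential backoff",
    "README: installation instructions for macOS and Linux",
    "def parse_config(path): load TOML settings",
]
print(build_context("where is the retry backoff logic?", docs, k=1))
```

A real setup would use embeddings or a code-aware index instead of word overlap, but the shape is the same: retrieve first, then prompt.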
5. I self-host more and more, but I haven't settled on a stack here yet. I'd actually like to run gpt-oss myself, but that's too expensive for me.
6. The #openai codex CLI remains a bit of a mystery to me.
https://github.com/openai/codex
Yes, it's slow, but in many cases the output is great. It can be used with the same OpenRouter models.
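Pointing codex at OpenRouter is a config change. A sketch of what the `~/.codex/config.toml` might look like; the key names here are from memory of the Codex docs and should be verified against the repo before use:

```toml
# Assumed config.toml shape for a custom provider (verify key names!).
model = "openai/gpt-oss-120b"
model_provider = "openrouter"

[model_providers.openrouter]
name = "OpenRouter"
base_url = "https://openrouter.ai/api/v1"
env_key = "OPENROUTER_API_KEY"
```

With that in place, the same gpt-oss / Kimi / GLM slugs from point 3 should work.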