I've seen the light of MCP. Well, not the protocol itself. My understanding is that it's pretty janky, and I don't need to be an expert to see the context injection threat it represents.

But I have Claude desktop rigged with local memory, filesystem, shell tools, and a behavioral correction rule system, and it is pretty slick! Next I want to try it with Ollama, although I doubt any model I can run locally will handle the context overhead.
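For anyone curious, here's a rough sketch of the kind of claude_desktop_config.json wiring I mean, using the official reference filesystem and memory servers (the allowed path is a placeholder, and my actual setup has more than this):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    },
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    }
  }
}
```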

#AI #mcp #llm_agent #local_llm

@TheServitor I tried local LLMs with MCP and they suck. This doc from Cline explains it well:

https://docs.cline.bot/running-models-locally/read-me-first

@samuraihack

Well, dang. I was afraid of that. I've used local models for generation but not for tool calls, and I was skeptical it would work. I was more worried about the context window than inference accuracy, but accuracy being the bottleneck is worse.

Cline (@cline.bot) on Threads:

For a long time, local models were basically unusable in Cline. But these models are getting better & smaller 👀 model: lmstudio-community/Qwen3-30B-A3B-GGUF (3-bit, 14.58 GB) hardware: MacBook Pro (M4 Max, 36GB RAM) (run via lmstudio)
