I've been trying to figure out local LLM stuff since it seems employers are looking for AI-capable people and I should at least see what's up, but I really don't trust cloud models.
Anyone have good success with local #AI #Ollama models for #code (#Zed) on a 12GB GPU? All the models I've tried so far either run fast but use tools incorrectly, or don't fit in VRAM and are painfully slow.