#AskFedi #programming

I've been trying to figure out local LLM stuff since it seems employers are looking for AI-capable people and I should at least see what's up, but I really don't trust cloud models.

Anyone have good success with local #AI #Ollama models for #code (#Zed) for a 12GB GPU? All the models I've tried so far are either quick but use tools incorrectly, or don't fit on the GPU and are painfully slow.

@Charlie Best to have a 16 GB VRAM GPU if you can. I've used llama.cpp quite a bit. Hugging Face is also good if you want more control and API access for inference/training.
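A back-of-the-envelope way to guess whether a quantized model fits in VRAM, using assumed numbers (roughly 0.57 bytes/parameter for a Q4 quant, i.e. ~4.5 bits, plus a ballpark allowance for KV cache and runtime overhead — none of these figures come from the thread):

```python
# Rough fit check for quantized local models.
# ASSUMPTIONS (not from the thread): Q4 quantization ~0.57 bytes/param,
# ~1.5 GB of overhead for KV cache, context, and runtime buffers.
def fits_in_vram(params_billion, vram_gb, bytes_per_param=0.57, overhead_gb=1.5):
    weights_gb = params_billion * bytes_per_param
    return weights_gb + overhead_gb <= vram_gb

print(fits_in_vram(7, 12))   # 7B at Q4 on a 12 GB card -> True
print(fits_in_vram(14, 12))  # 14B at Q4 -> True, but tight once context grows
print(fits_in_vram(32, 12))  # 32B at Q4 -> False, spills to CPU and gets slow
```

This lines up with the symptoms described: models that fit entirely in 12 GB are quick, while anything that spills past VRAM falls back to CPU offload and crawls.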