#AskFedi #programming

I've been trying to figure out local LLM stuff since it seems employers are looking for AI-capable people and I should at least see what's up, but I really don't trust cloud models.

Anyone have good success with local #AI #Ollama models for #code (#Zed) for a 12GB GPU? All the models I've tried so far are either quick but use tools incorrectly, or don't fit on the GPU and are painfully slow.
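For context, this is roughly the workflow I've been testing (the model name and tag here are just one example of a coder-tuned model from the Ollama library, not an endorsement, and availability may vary):

```shell
# Pull a coder-tuned model at a quantization that should fit in ~12GB VRAM
# (example model/tag, assuming it's in the Ollama library)
ollama pull qwen2.5-coder:7b

# Run it once, then check whether it actually landed on the GPU
ollama run qwen2.5-coder:7b "write a hello world in rust"
ollama ps   # PROCESSOR column shows GPU/CPU split; "100% GPU" means it fits
```

If `ollama ps` shows a partial CPU split, that's usually when generation gets painfully slow.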

@Charlie I just use openrouter.ai

It's all the AI under one roof. Easy and cheap.

Why don't you 'trust' providers? I don't think there are any issues for personal, recreational use?

@ajit_456

I don't trust the AI companies for the same reason I have a hard time trusting Google and Meta: they profit off of vacuuming up as much data as possible.

Some say they don't train off of my data, but I have no way to know if they are telling the truth.

Local AI is still a black box, but I could actually monitor it if I really cared.

But I am using it on an open source project anyways. So you're right, it doesn't _really_ matter. They could train off of my stuff either way.

@Charlie "Some say they don't train off of my data"

It'd be a concern if you're working with sensitive data. Personally I wouldn't care otherwise :)