I think I've found a happy medium with LLMs. I got ollama working in my WSL instance of Ubuntu, using CUDA with the Quadro in my laptop. I grabbed a few FOSS models of various sizes, almost all of them coding-focused.
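For anyone curious, the setup was roughly this (the model names and sizes below are examples, not necessarily the exact ones I pulled):

```shell
# Inside the WSL Ubuntu instance.
# Install ollama; its install script detects CUDA-capable NVIDIA GPUs.
curl -fsSL https://ollama.com/install.sh | sh

# Start the server (listens on localhost:11434 by default).
ollama serve &

# Pull a few open coding-focused models of various sizes.
ollama pull qwen2.5-coder:1.5b   # small and fast, good for autocomplete
ollama pull qwen2.5-coder:7b     # larger, better for chat/brainstorming

# Sanity check that the GPU is actually being used:
# run a quick prompt, then look for the ollama process in nvidia-smi.
ollama run qwen2.5-coder:7b "hello"
nvidia-smi
```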

I then wired up VSCode in Windows with the Continue extension talking to the LLMs in my WSL Ubuntu instance. I can now have a proper dialogue with the LLMs, get autocomplete assistance, and brainstorm solutions, all fully local.
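The Continue side is just a couple of entries in its config file (~/.continue/config.json, assuming the JSON-style config; the model names here are examples, swap in whatever you pulled). Since WSL forwards localhost, VSCode on Windows can reach the ollama server in Ubuntu directly:

```json
{
  "models": [
    {
      "title": "Local coder (chat)",
      "provider": "ollama",
      "model": "qwen2.5-coder:7b",
      "apiBase": "http://localhost:11434"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Local coder (autocomplete)",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b",
    "apiBase": "http://localhost:11434"
  }
}
```

Using a smaller model for autocomplete keeps the suggestions snappy while the bigger one handles chat.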

That also lets me use proprietary company code and data without risk, to get more accurate suggestions and help with data validation.

#ollama #llm #vscode #coding #ai

I'm still not fully comfortable with using tools like this, but I'm trying to do it in the most mindful/ethical way possible. By only using models that are fully open source and running everything locally, I can hopefully be a good steward of the data I'm using, protect my company from leaks, and take advantage of some of the rudimentary stuff it works better for.

I'm primarily using it for autocomplete so far, which has been amazing.

It's also been helpful getting me out of an ADHD rut by generating some boilerplate code for a couple projects to help get my ass off the starting line.