Probably going to swap out Ollama for llama-swap in my local #LLM stuff. Getting a tad cheesed at Ollama, and supposedly performance on the stock llama.cpp server (which the llama-swap Docker image bundles by default) might be better. We'll see.

#LLM

The things I principally value(d) Ollama for -- running multiple models and a model idle timeout -- are apparently easy to do in llama-swap. The thing that remains for me is whether my AMD GPU (it sucks, but I would like it to work) will play nice with it. Ollama just throws in the towel and bundles an entire ROCm stack in their own distribution, which works, but I've had Problems with system-wide stock AMD GPU support. We'll see.
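For my own notes, the llama-swap side looks roughly like this: one YAML config listing each model, the llama-server command that launches it, and a ttl for the idle timeout. The model names and paths below are made up, and the field names (models, cmd, ttl, ${PORT}) are from my skim of the llama-swap README, so treat it as a sketch, not gospel:

```yaml
# config.yaml for llama-swap -- a sketch, not a verified config
models:
  "qwen2.5-7b":
    # llama-swap starts stock llama-server on demand; ${PORT} is filled in by the proxy
    cmd: |
      /app/llama-server
        --model /models/qwen2.5-7b-instruct-q4_k_m.gguf
        --port ${PORT}
    # unload after 5 minutes idle -- the "model timeout" I want from Ollama
    ttl: 300

  "llama3.1-8b":
    cmd: |
      /app/llama-server
        --model /models/llama3.1-8b-instruct-q4_k_m.gguf
        --port ${PORT}
    ttl: 300
```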

#LLM

Somewhat hilarious, as AMD support is *supposed* to be in the Linux kernel! Which it sort of is, I guess. Also got my eye on that apparently reasonably priced Intel GPU (32GB VRAM for ~$1000) that's coming out at some point; llama.cpp amply supports Intel GPUs, but afaik Ollama does not. Anyway.
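(The Intel GPU path in llama.cpp goes through the SYCL backend via oneAPI; a build is roughly the following, with the flags recalled from llama.cpp's SYCL docs rather than verified, so double-check before trusting it.)

```sh
# Build llama.cpp with the SYCL backend for Intel GPUs -- sketch, see docs/backend/SYCL.md
source /opt/intel/oneapi/setvars.sh
cmake -B build -DGGML_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
cmake --build build --config Release -j
```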

#LLM

@adr It's amazing how bad the ROCm experience still is. I remember back in 2023 investors thought AMD would have it fixed in a year or two, and I was just like, I've been waiting my entire life for ATI, and now AMD, to get their software ducks in a row. We are still waiting.
@dvshkn Yeah, it's... yeah. It's grotty. :/
@adr tbf it sounds like the Linux desktop and gaming experience is pretty lovely with AMD now, because Valve did AMD's homework for them.
@dvshkn Oh yeah! I have a Steam Deck and everything works *just fine* on that!