I packaged #ollama as a snap (https://snapcraft.io/ollama) for running #localai #llm models.

Going to have a go at packaging some more of these tools.

Just fixed CUDA support in revision 10 of the #ollama snap package (run `snap refresh ollama` to get it ASAP if you've already installed it and can't wait). dolphin-mixtral now does 38 tokens/s on my home workstation 🧑‍🔬