April 2026 TLDR setup for Ollama + Gemma 4 26B on a Mac mini (Apple Silicon) — auto-start, preload, and keep-alive

https://gist.github.com/greenstevester/fc49b4e60a4fef9effc79066c1033ae5
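The auto-start + keep-alive part of the setup named in the title can be sketched as a launchd user agent. This is a guess at one reasonable layout, not the gist's actual file: the label, the Homebrew binary path, and the choice of `OLLAMA_KEEP_ALIVE=-1` (keep loaded models resident indefinitely) are all assumptions.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Hypothetical label; any reverse-DNS name works -->
  <key>Label</key>
  <string>com.example.ollama-serve</string>
  <!-- Assumes a Homebrew install on Apple Silicon -->
  <key>ProgramArguments</key>
  <array>
    <string>/opt/homebrew/bin/ollama</string>
    <string>serve</string>
  </array>
  <key>EnvironmentVariables</key>
  <dict>
    <!-- -1 = never unload models after a request -->
    <key>OLLAMA_KEEP_ALIVE</key>
    <string>-1</string>
  </dict>
  <!-- Start at login and restart if the process dies -->
  <key>RunAtLoad</key>
  <true/>
  <key>KeepAlive</key>
  <true/>
</dict>
</plist>
```

Saved as `~/Library/LaunchAgents/com.example.ollama-serve.plist` and loaded with `launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/com.example.ollama-serve.plist`, this starts the server at login; a one-shot request to `http://localhost:11434/api/generate` with your model tag then preloads the model into memory.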

@jcrabapple Are you using it?
@jcrabapple Do you do something similar using Ollama?
@dcpatton I run Ollama on my desktop with an AMD GPU. It's old so I can't run big models, but small ones work ok. Local AI models are getting better and more efficient.
@jcrabapple Have you blogged about your setup? I assume it is a linux desktop.
@dcpatton Uh, I have my hardware listed on my Uses page on omg.lol. But yeah, it's an Aurora (Fedora) Linux desktop with a Ryzen 7 3800X, 32GB RAM, and a Radeon RX 5700 XT.