Gemma 4 is very impressive. I tested it on my home computer that has no GPU, only an i5 CPU, and 64 GB of RAM.
Running it was simple (Ollama takes the model name as a positional argument, not a `--model` flag):
ollama run gemma4 --think=false
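Beyond the CLI, Ollama also serves an HTTP API on localhost:11434, which makes it easy to script against the same local model. A minimal Python sketch, assuming the default server address and the `gemma4` model name from the command above:

```python
import json
import urllib.request

# Default Ollama endpoint; adjust host/port if your setup differs.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(prompt: str, model: str = "gemma4") -> str:
    """Build the JSON body for a single, non-streaming generate request."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})


def ask(prompt: str, model: str = "gemma4") -> str:
    """Send the prompt to the local Ollama server and return its reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt, model).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server from `ollama run` already up, `ask("Why is the sky blue?")` returns the model's answer as a plain string.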
It's not instant, but it's fast enough, and it gives solid answers. When I tried running LLMs on this same CPU in 2024, the output was slow, unusable garbage. I'm hopeful that local is the future.