Ollama is now powered by MLX on Apple Silicon in preview

https://ollama.com/blog/mlx


Today, we're previewing the fastest way to run Ollama on Apple silicon, powered by MLX, Apple's machine learning framework.
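
For context on what "powered by MLX" means in practice: MLX can also be driven directly from Python via Apple's mlx-lm package. The sketch below is not Ollama's internal code, just a minimal illustration of MLX-based inference on Apple silicon, assuming mlx-lm is installed (pip install mlx-lm) and using an example model name from the mlx-community hub.

    # Illustrative only: Ollama's MLX backend is internal, but the same
    # framework can be exercised directly with the mlx-lm package.
    from mlx_lm import load, generate

    # Download (on first use) and load a quantized model; the exact model
    # name here is an assumption for the example, not from the blog post.
    model, tokenizer = load("mlx-community/Llama-3.2-3B-Instruct-4bit")

    # Run a single generation; MLX executes on the Apple silicon GPU.
    text = generate(model, tokenizer,
                    prompt="Why run LLMs on device?",
                    max_tokens=128)
    print(text)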

LLMs on device are the future. It's more secure, it eases the mismatch between inference demand and data-center supply, and it would use less electricity. It's just a matter of getting the performance good enough. Most users don't need frontier model performance.
"Most users don't need frontier model performance" unfortunately, this is not the case.
... another user who "doesn't need frontier model performance" downvoted. LOL, people, why are you so predictable? No wonder you're being replaced by LLMs ...
Complaining about downvotes is futile and also against the HN guidelines.