Ollama is now powered by MLX on Apple Silicon in preview

https://ollama.com/blog/mlx


Today, we're previewing the fastest way to run Ollama on Apple silicon, powered by MLX, Apple's machine learning framework.

How does it compare to some of the newer MLX inference engines, like mlx-optiq, that support TurboQuant-style quantization? https://mlx-optiq.pages.dev/
mlx-optiq — Mixed-Precision Quantization for Apple Silicon

Per-layer sensitivity analysis and TurboQuant KV cache for MLX on Apple Silicon.
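Per-layer sensitivity analysis can be pictured as measuring how much error quantization introduces in each layer, then spending the bit budget on the most sensitive layers. A minimal sketch of that idea in plain Python (hypothetical illustration, not mlx-optiq's actual API):

```python
# Hypothetical sketch of per-layer sensitivity analysis for mixed-precision
# quantization. This illustrates the general idea only; mlx-optiq's real
# method and API may differ.
import random

def quantize(weights, bits):
    """Uniform symmetric quantization to the given bit width."""
    levels = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / levels or 1.0
    return [round(w / scale) * scale for w in weights]

def sensitivity(weights, bits):
    """Mean squared error introduced by quantizing this layer."""
    q = quantize(weights, bits)
    return sum((w - x) ** 2 for w, x in zip(weights, q)) / len(weights)

random.seed(0)
# Stand-in "layers": random weight vectors in place of a real model.
layers = {f"layer{i}": [random.gauss(0, 1) for _ in range(256)] for i in range(4)}

# Rank layers by 4-bit quantization error; give the two most
# sensitive layers 8 bits and leave the rest at 4 bits.
errors = {name: sensitivity(w, 4) for name, w in layers.items()}
most_sensitive = sorted(errors, key=errors.get, reverse=True)[:2]
plan = {name: (8 if name in most_sensitive else 4) for name in layers}
print(plan)
```

A real engine would measure sensitivity against model outputs rather than raw weight error, but the bit-allocation loop has the same shape.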