Ollama just added MLX support on Apple Silicon. That's a big deal for anyone who wants to run AI models locally on a Mac.
MLX is Apple's open-source machine learning framework, designed around the unified memory of M-series chips. With the new backend, your MacBook can run models like Llama 3 faster and with lower power draw than Ollama's existing llama.cpp-based engine.
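The quickest way to try it is the CLI: `ollama pull llama3`, then `ollama run llama3`. For programmatic use, here's a minimal sketch with the official `ollama` Python client. The model tag and prompt are placeholders, and I'm assuming backend selection (MLX vs. the default engine) happens inside the Ollama server rather than in client code; that may depend on your Ollama version.

```python
# Minimal sketch: chatting with a locally served model via the official
# "ollama" Python client (pip install ollama). Assumes the Ollama server
# is running and the model has been pulled (e.g. `ollama pull llama3`).
# Note: whether the MLX backend is used is assumed to be decided by the
# server on Apple Silicon; nothing in this client code controls it.
import ollama

response = ollama.chat(
    model="llama3",  # placeholder tag; swap in any model you have pulled
    messages=[
        {"role": "user", "content": "In one sentence, what is Apple's MLX?"}
    ],
)

# The assistant's reply lives under message.content in the response.
print(response["message"]["content"])
```

Either way, the client-facing workflow stays the same; the backend change should only show up as better tokens-per-second and battery life.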
