Apple now sells refurbished M5 MacBook Pro, iPad 11, and M4 iPad Pro

Apple’s refurbished store now spans M5 MacBook Pro, iPad 11, and M4 iPad Pro, all backed by a full one‑year warranty and AppleCare+ eligibility.

https://gadgetbond.com/apple-m5-macbook-pro-m4-ipad-pro-ipad-11-refurbished-availability/

Alex Cheema (@alexocheema)

Alex Cheema writes that the arcee_ai model runs well on Apple Silicon RDMA clusters with exolabs. The model has 398B parameters with only 13B active, so it is very sparse and a good fit for Apple Silicon; the full 16-bit model is about 800GB, which takes four 256GB Mac Studios. He adds that it can also run in 6-bit. A rough sizing calculation is sketched after this post.

https://x.com/alexocheema/status/2040574746933305589

#llm #applesilicon #sparsemodel #inference #macstudio

Alex Cheema (@alexocheema) on X

Yes, @arcee_ai should run well on Apple Silicon RDMA clusters with @exolabs. It’s a 398B model, 13B active parameters (very sparse so great for Apple Silicon). It’s natively 16-bit at ~800GB so you’d need 4 x 256GB Mac Studios to run the full model. You can run it in 6-bit on

X (formerly Twitter)
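
As a rough sanity check on the sizing claims above, here is a minimal Python sketch that converts the quoted parameter count into weight storage at 16-bit and 6-bit. The 398B total, the 256GB-per-node figure, and both precisions come from the post; ignoring KV cache, activations, and quantization metadata is a simplifying assumption, so these are lower bounds.

```python
import math

# Back-of-the-envelope sizing for the 398B-parameter sparse model quoted above.
# Only weight storage is counted; KV cache, activations, and quantization
# metadata are ignored, so these are optimistic lower bounds.

GB = 1e9  # decimal gigabytes, matching the "~800GB" figure in the post

def weights_gb(total_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GB at a given precision."""
    return total_params * bits_per_weight / 8 / GB

total_params = 398e9   # total parameters (only ~13B active per token)
node_memory_gb = 256   # unified memory per Mac Studio, per the post

for bits in (16, 6):
    size = weights_gb(total_params, bits)
    nodes = math.ceil(size / node_memory_gb)
    print(f"{bits:>2}-bit weights: ~{size:.0f} GB -> at least {nodes} x 256GB Mac Studio(s)")
```

This reproduces the ~800GB / four-node figure from the post; the 6-bit result (~300GB across two nodes) counts weights only and says nothing about how much headroom is left for KV cache at long contexts.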

Benchmarking Gemma 4 (e4b): Linux vs. Mac 🚀

I tested the e4b Gemma 4 variant on a 32GB Linux setup vs. a 16GB Mac.
The Mac was 4.5x faster (44s vs. 199s) and nailed a complex poem constraint.
Find more details about _why_ the Linux results differed at
https://www.lotharschulz.info/2026/04/04/gemma-4-performance-showdown-linux-vs-mac-benchmarks/
I also experimented with Ollama's MLX preview support using a Qwen model; a minimal timing sketch follows this post.

#Gemma4 #AI #LocalAI #Linux #AppleSilicon
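
For anyone who wants to reproduce this kind of comparison, below is a minimal timing sketch against Ollama's local REST API. It uses only the Python standard library; the model tag and prompt are placeholders rather than the exact setup from the post, and the duration fields (`prompt_eval_duration`, `eval_duration`, reported in nanoseconds) are what Ollama returns for a non-streaming `/api/generate` request.

```python
import json
import urllib.request

# Minimal sketch: time a single generation against a local Ollama server and
# derive prefill/decode throughput from the durations Ollama reports.
# The model tag is a placeholder; substitute whichever variant you pulled.

OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "gemma",  # placeholder tag, not necessarily the e4b variant above
    "prompt": "Write a four-line poem in which every line starts with the letter S.",
    "stream": False,
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# All durations are reported in nanoseconds.
prefill_tps = result["prompt_eval_count"] / (result["prompt_eval_duration"] / 1e9)
decode_tps = result["eval_count"] / (result["eval_duration"] / 1e9)
print(f"total: {result['total_duration'] / 1e9:.1f}s  "
      f"prefill: {prefill_tps:.1f} tok/s  decode: {decode_tps:.1f} tok/s")
```

Running the same script on both machines against the same model tag keeps the comparison apples-to-apples, since the timings come from Ollama itself rather than wall-clock measurements around the CLI.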

TinyGPU: Apple-validated drivers for connecting external graphics cards to Apple Silicon Macs http://dlvr.it/TRrztL #TinyGPU #AppleSilicon
TinyGPU: Apple-validated drivers for connecting external graphics cards to Apple Silicon Macs

The tinkerers at TinyCorp are continuing their experiments to speed up AI compute on the Mac. After developing a system to connect an RTX card to an Apple Silicon Mac, they have just announced that Apple has validated their driver for ...

MacGeneration

Ollama is now accelerated in preview on Apple Silicon (M5/M5 Pro/M5 Max) via MLX, Apple's ML framework. Prefill and decode speeds improve substantially on Qwen3.5-35B-A3B, and NVFP4 quantization makes it possible to keep quality on par with production environments. Cache reuse, smart checkpoints, and smart eviction improve responsiveness and memory efficiency. Released as Ollama 0.19 (32GB of unified memory recommended).

https://ollama.com/blog/mlx

#applesilicon #mlx #nvfp4 #localllm #performance

Ollama is now powered by MLX on Apple Silicon in preview · Ollama Blog

Today, we're previewing the fastest way to run Ollama on Apple silicon, powered by MLX, Apple's machine learning framework.

Ollama 0.19: with MLX on board, AI inference on the Mac is up to 2x faster

Ollama 0.19 ships with Apple's MLX framework, boosting AI inference speed on the Mac by up to 2x. The post covers this and other major updates, including NVFP4 support and cache improvements.

https://aisparkup.com/posts/10740

You probably know all about Apple's big-name gadgets, but as the company turns 50, you mightn't recall some of the industry's most useful ideas. https://www.pickr.com.au/news/2026/5-curiously-clever-apple-ideas-worth-recalling-at-50 #apple #computers #news #phones #aipc #aluminium #anniversaries #applesilicon #ipad #iphone #macbook #macbookair #magnets #magsafe #typec

🛠️ Ollama: Native MLX Backend for Apple Silicon

Ollama now runs on Apple MLX natively. On M5 Max + Qwen3.5-35B-A3B: 1851 tok/s prefill, 134 tok/s decode. Also adds NVFP4 quantization for production parity with NVIDIA inference and improved KV cache reuse for agentic workloads. A rough latency estimate based on these figures is sketched below.

solomonneas.dev/intel

#Ollama #LLM #AppleSilicon #DevTools
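
To put the quoted M5 Max throughput in perspective, here is a small sketch that turns the prefill and decode rates into an end-to-end latency estimate for a given prompt and output length. The 1851 and 134 tok/s figures come from the post above; the example prompt/output sizes are arbitrary, and the model deliberately ignores cache reuse, batching, and per-request overhead.

```python
# Rough latency model built from the throughput figures quoted above
# (M5 Max, Qwen3.5-35B-A3B). Ignores KV cache reuse, batching, and
# per-request overhead, so real-world numbers will differ.

PREFILL_TOK_S = 1851.0  # prefill throughput from the post
DECODE_TOK_S = 134.0    # decode throughput from the post

def estimated_latency_s(prompt_tokens: int, output_tokens: int) -> float:
    """Time for a full response: prefill the prompt, then decode the output."""
    return prompt_tokens / PREFILL_TOK_S + output_tokens / DECODE_TOK_S

# Example: a short chat turn vs. an agentic request with a large context.
for prompt, output in [(2_000, 300), (16_000, 800)]:
    print(f"{prompt:>6} prompt + {output:>4} output tokens "
          f"-> ~{estimated_latency_s(prompt, output):.1f}s")
```

Prefill cost scales with context length, so the KV-cache-reuse improvements mentioned above matter most for agentic workloads that repeatedly resend large, mostly unchanged contexts.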

Ollama is now updated to run the fastest on Apple silicon, powered by MLX, Apple's machine learning framework.

This change unlocks much faster performance to accelerate demanding work on macOS:

- Personal assistants like OpenClaw
- Coding agents like Claude Code, OpenCode, or Codex

https://x.com/ollama/status/2038835449012351197

#ollama #applesilicon #mlx #macos #ai

ollama (@ollama) on X

Ollama is now updated to run the fastest on Apple silicon, powered by MLX, Apple's machine learning framework. This change unlocks much faster performance to accelerate demanding work on macOS: - Personal assistants like OpenClaw - Coding agents like Claude Code, OpenCode,

X (formerly Twitter)