Microsoft just unveiled Maia 200, a 3nm AI inference accelerator designed to undercut Nvidia's GPUs, Amazon's Trainium, and Google's TPUs on performance per dollar. 🤖
With 10+ PFLOPS of FP4 compute, ~5 PFLOPS of FP8, and 216 GB of HBM3e memory, a single Maia 200 node can comfortably run today's largest models, with headroom for bigger ones, while promising ~30% better performance per dollar than Microsoft's prior hardware.
🔗 https://techglimmer.io/what-is-maia-200-chip-maia-chip/
