How to Think About GPUs | How To Scale Your Model

We love TPUs at Google, but GPUs are great too. This chapter takes a deep dive into the world of GPUs – how each chip works, how they're networked together, and what that means for LLMs, especially compared to TPUs. While there are a multitude of GPU architectures from NVIDIA, AMD, Intel, and others, here we focus on NVIDIA GPUs. This section builds on <a href='https://jax-ml.github.io/scaling-book/tpus/'>Chapter 2</a> and <a href='https://jax-ml.github.io/scaling-book/training'>Chapter 5</a>, so you are encouraged to read them first.