Dell's CEO reckons that AI's hunger for memory will grow by as much as 625 times
Dell’s chief executive has warned that total memory demand from the AI market will explode over the next few years. Citing figures from a Bank of America event, Michael Dell claimed that, as memory per accelerator and system scale grow together, the overall memory requirement could be **about 625 times larger in 2028 than it was in 2022**. His baseline is the Nvidia H100, which shipped with 80 GB of HBM3 in 2022, while per-accelerator memory is expected to reach roughly 2 TB by 2028, a more than 25-fold increase. Multiplying that by an assumed 25-fold rise in the number of AI accelerators deployed in data centres yields the staggering multiplier.
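As a quick sanity check, the short Python sketch below multiplies the two growth factors quoted above. The round 25-fold fleet-growth factor is the assumption behind the claim, and the 1024 GB/TB conversion is ours:

```python
# Back-of-the-envelope check of the 625x multiplier using the figures above.
GB_PER_TB = 1024

h100_hbm3_2022_gb = 80              # H100 shipped with 80 GB of HBM3 in 2022
per_chip_2028_gb = 2 * GB_PER_TB    # ~2 TB per accelerator projected for 2028

per_chip_growth = per_chip_2028_gb / h100_hbm3_2022_gb  # ~25.6x
fleet_growth = 25                   # assumed 25x rise in deployed accelerators

print(f"per-chip memory growth: {per_chip_growth:.1f}x")                   # 25.6x
print(f"total memory demand:    {per_chip_growth * fleet_growth:.0f}x")    # ~640x
# Rounding both factors to 25x gives the headline figure: 25 * 25 = 625x.
```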
The implication of such growth is a looming shortage of high-bandwidth memory (HBM). Only three manufacturers (SK hynix, Samsung, and Micron) produce HBM, and even their combined capacity is unlikely to meet the projected demand. Beyond HBM, other memory categories such as LPDDR5X for laptops and NAND flash for storage are also expected to be stretched thin. A single Nvidia-based AI server can already require hundreds of gigabytes of LPDDR5X and multiple terabytes of SSD storage; a fully kitted rack might house up to 17 TB of DRAM and 547 TB of flash, and large AI data centres deploy hundreds or thousands of such racks.
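To put those per-rack figures in perspective, here is a minimal sketch that scales them up to data-centre size. The 17 TB and 547 TB per-rack numbers come from the paragraph above; the rack counts are purely illustrative:

```python
# Scale the per-rack memory figures quoted above to data-centre size.
# Per-rack numbers are from the article; rack counts are hypothetical.
DRAM_PER_RACK_TB = 17
FLASH_PER_RACK_TB = 547

for racks in (100, 1_000, 5_000):  # illustrative deployment sizes
    dram_pb = racks * DRAM_PER_RACK_TB / 1024
    flash_pb = racks * FLASH_PER_RACK_TB / 1024
    print(f"{racks:>5} racks: {dram_pb:7.1f} PB DRAM, {flash_pb:8.1f} PB flash")
```

Even at the smallest of these hypothetical sizes, a deployment would consume petabytes of DRAM and tens of petabytes of flash, which is why demand pressure extends well beyond HBM.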
If manufacturing can keep pace, the cost of memory may remain “affordable” relative to high‑end graphics cards, but the supply‑demand gap threatens to make memory a bottleneck for AI development. The industry will need significant expansion of production facilities and possibly new memory technologies to avoid a crisis. Until then, the predicted 625‑fold surge serves as a stark reminder that the future of AI hinges not just on processing power, but on the availability of massive amounts of fast, high‑capacity memory.
