#AMD unwraps Instinct #MI500 boasting 1,000X more performance versus MI300X — setting the stage for the era of #YottaFLOPS data centers
Achieving a 1,000X performance increase in four years is a major achievement, though we should keep in mind that the Instinct MI300X and Instinct MI500 are separated by a three-generation instruction set architecture (ISA) gap (#CDNA3 => #CDNA6).
Next-generation #CDNA 6 architecture on-track for 2027.
https://www.tomshardware.com/tech-industry/artificial-intelligence/amd-unwraps-instinct-mi500-boasting-1-000x-more-performance-versus-mi300x-setting-the-stage-for-the-era-of-yottaflops-data-centers
Probably #FP4
Tom's Hardware
AMD targets efficiency: the 30x25 goal is ever closer

AMD announces a major milestone: the efficiency of its processors and GPUs has increased more than 28-fold, surpassing the initial target of its 30x25 plan.

CeoTech
#AMD #Instinct #MI300A #APU With #CDNA3 #GPU, #Zen4 #CPU & #UnifiedMemory Offers Up To 4x Speedup Versus Discrete GPUs In #HPC
Since the AMD Instinct MI300A accelerator uses a unified #HBM interface, it eliminates the need for data replication and does not require a #programming distinction between the host and the device memory spaces.
https://wccftech.com/amd-instinct-mi300a-apu-cdna-3-gpu-zen-4-cpu-unified-memory-up-to-4x-speedup-versus-discrete-gpus/
AMD's Instinct MI300A APUs deliver a substantial performance improvement in HPC workloads versus traditional discrete GPUs.

Wccftech
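The unified-memory point above can be illustrated in HIP-style C++. This is a sketch only: it assumes a ROCm toolchain and an MI300A-class part with unified memory enabled, and the kernel and variable names are hypothetical.

```cpp
// Illustrative sketch: on an APU with unified HBM such as the MI300A,
// one allocation can be touched by both CPU and GPU, with no explicit
// hipMemcpy between separate host and device memory spaces.
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void scale(double* x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0;                     // GPU writes through the shared pointer
}

int main() {
    const int n = 1 << 20;
    double* x = nullptr;
    hipMallocManaged(&x, n * sizeof(double));   // single allocation, visible to both sides
    for (int i = 0; i < n; ++i) x[i] = 1.0;     // CPU initializes directly
    scale<<<(n + 255) / 256, 256>>>(x, n);      // GPU uses the same pointer
    hipDeviceSynchronize();
    std::printf("x[0] = %.1f\n", x[0]);         // CPU reads the result, no copy-back
    hipFree(x);
    return 0;
}
```

On a discrete GPU the same program would typically need `hipMalloc` plus explicit `hipMemcpy` calls in each direction; the unified HBM interface is what removes that step.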
#AMD #Instinct #MI300 is THE Chance to Chip into #NVIDIA #AI Share
NVIDIA is facing very long lead times for its #H100 and #A100; if you want NVIDIA hardware for AI and have not already ordered, don't expect delivery before 2024. As a traditional GPU, the MI300X is the GPU-only part: all four center tiles are GPU, and with 192GB of #HBM3, AMD can simply fit more onto a single GPU than NVIDIA. The #MI300A pairs 24 #Zen4 CPU cores with #CDNA3 GPU cores and 128GB of #HBM3; this is the APU deployed in the El Capitan 2+ exaflop #supercomputer.
https://www.servethehome.com/amd-instinct-mi300-is-the-chance-to-chip-into-nvidia-ai-share/
The AMD Instinct MI300 is a family of CPU and GPU offerings that embodies a next-generation approach for AI and high-performance computing

ServeTheHome
AMD has big plans for servers. The Instinct MI300 is a huge combined processor with HBM, Siena is coming to edge servers, and Genoa-X once again gets massive amounts of cache.
HPC roadmap: AMD's giant APU with HBM is finally coming

heise online