NVIDIA Data Center (@NVIDIADC)

The MLPerf Inference v6.0 results are out, and the post highlights that systems based on NVIDIA Blackwell achieved top-tier AI factory throughput in inference performance. It emphasizes the latest AI inference benchmark and NVIDIA's hardware performance lead.

https://x.com/NVIDIADC/status/2039359226712097227

#mlperf #nvidia #blackwell #inference #benchmark

NVIDIA Data Center (@NVIDIADC) on X

📣 MLPerf Inference v6.0 results are in. Learn how systems powered by NVIDIA Blackwell set the pace on inference, delivering the highest AI factory throughput. 🔗 https://t.co/Abid9w6wx3

X (formerly Twitter)

NVIDIA (@nvidia)

NVIDIA emphasized that in MLPerf Inference v6.0 it achieved the highest token output across a broad range of models through extreme co-design. The message: for AI factory productivity, delivered performance matters more than chip specifications.

https://x.com/nvidia/status/2039419585254875191

#nvidia #mlperf #inference #benchmark #ai

NVIDIA (@nvidia) on X

Delivered performance, not peak chip specifications, drives AI factory productivity. Rigorous benchmarks are the only way to see past the noise. In MLPerf Inference v6.0, NVIDIA extreme co-design delivered the highest token output across the broadest range of models and

X (formerly Twitter)

🚀 NVIDIA’s new NVFP4 training recipe slashes AI model training time and cost, powering Blackwell Ultra GPUs to set new MLPerf Training records on large language models like Llama 3.1. Discover how GPU acceleration is reshaping open‑source AI development. #NVFP4 #BlackwellUltra #MLPerf #Llama3_1

🔗 https://aidailypost.com/news/nvidias-nvfp4-training-recipe-boosts-ai-speed-cuts-costs

Large language models #LLMs are growing extremely quickly, and the #hardware systems that they require can’t keep up with the pace. Each time #MLPerf introduces a new benchmark, training time increases. The data tells the story. spectrum.ieee.org/mlperf-trends
This year's #MLPerf introduced three new benchmark tests (its largest yet, its smallest yet, and a new voice-to-text model), and #Nvidia's Blackwell Ultra topped the charts on the two largest benchmarks.

MLPerf Introduces Largest and ...
Nvidia's Blackwell Ultra Dominates MLPerf Inference

Nvidia's Blackwell Ultra chip is setting new standards in AI performance. How does it achieve a nearly 50% performance gain?

IEEE Spectrum

💪 NVIDIA Blackwell Ultra: 5× faster – why?

▶️ Explains NVFP4 and TTFT
▶️ Uses 288 GB of HBM3e RAM!
▶️ Separates context and generation

#ai #ki #artificialintelligence #BlackwellUltra #Nvidia #mlperf #aiinference #tech2025


https://kinews24.de/nvidia-blackwell-ultra-architektur-benchmark-rekorde-2025/

MLCommons released the results of MLPerf Storage v2.0 today. Lightbits participated with amazing results. 🚀 To view all of the benchmark results, go to
https://mlcommons.org/benchmarks/storage/
#mlperf #ai #softwaredefinedstorage
Benchmark MLPerf Storage | MLCommons V1.1 Results

The MLPerf Storage benchmark suite measures how fast storage systems can supply training data when a model is being trained. Below is a short summary of the workloads and metrics from the latest round of benchmark results submissions. 

MLCommons
New #MLPerf training results are in, and #Nvidia's Blackwell GPUs continue to dominate across all six benchmarks. The computers built around the newest AMD GPU, MI325X, matched the performance of Blackwell’s predecessor on the most popular LLM fine-tuning benchmark. spectrum.ieee.org/mlperf-train...

Is Nvidia's Blackwell the Unst...
Nvidia’s Blackwell Conquers Largest LLM Training Benchmark

New MLPerf training results put AMD’s MI325X accelerator on par with Nvidia’s H200

IEEE Spectrum
MLPerf Training 5.0: Nvidia takes the lead with GB200 NVL72

In the MLPerf Training 5.0 benchmarks, Nvidia takes the lead with the GB200 NVL72, demonstrating performance per dollar and scaling.

ComputerBase
MLPerf: tests to objectively measure AI offerings

Developed by the nonprofit organization MLCommons, these benchmarks evaluate the performance of infrastructure sold for training AI models or running inference on them.

LeMagIT.fr