Nvidia delivers first Vera Rubin AI GPU samples to customers — 88-core Vera CPU paired with Rubin GPUs with 288 GB of HBM4 memory apiece

https://lemmy.world/post/43583117

Nvidia’s Vera Rubin platform is the company’s next-generation architecture for AI data centers. It includes:

- an 88-core Vera CPU
- a Rubin GPU with 288 GB of HBM4 memory
- a Rubin CPX GPU with 128 GB of GDDR7
- an NVLink 6.0 switch ASIC for scale-up, rack-scale connectivity
- a BlueField-4 DPU with an integrated SSD to store key-value cache
- Spectrum-6 Photonics Ethernet and Quantum-CX9 1.6 Tb/s Photonics InfiniBand NICs
- Spectrum-X Photonics Ethernet and Quantum-CX9 Photonics InfiniBand switching silicon for scale-out connectivity

288 GB HBM4 memory

jfc…

Looking at the specs… fucking hell, these things probably cost over 100k.

I wonder if we’ll see a generational performance leap with LLMs scaling to this much memory.

LLMs can already use way more, I believe; they don’t really run them on a single one of these things.

The HBM4 would likely be great for speed though.

Current models are speculated at 700 billion parameters plus. At 32-bit precision (full float, 4 bytes per parameter), that’s 2.8 TB of RAM per model, or about 10 of these units. There are ways to lower it, but if you’re trying to run full precision (say, for training) you’d use over 2x this, maybe 4x, depending on how you store gradients and optimizer updates. Possible I suppose they train at 32-bit, but I’d be kind of surprised.
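A quick sanity check on that napkin math (the 700B figure is speculation, not a published number, and the dtype list is just the usual suspects):

```python
# Back-of-the-envelope weight-memory math for a hypothetical 700B-param model.
params = 700e9           # speculated parameter count, not a published number
hbm_per_gpu_gb = 288     # Rubin GPU HBM4, per the article

for dtype, bytes_per_param in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    weights_gb = params * bytes_per_param / 1e9
    print(f"{dtype:>9}: {weights_gb / 1e3:4.2f} TB of weights, "
          f"~{weights_gb / hbm_per_gpu_gb:.1f} GPUs just to hold them")

# fp32 -> 2.80 TB, ~9.7 GPUs (the "about 10 units" above).
# Training adds gradients plus optimizer state (Adam keeps ~2 extra tensors
# per weight), hence the 2-4x multiplier over the inference footprint.
```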

Yeah they’re going to cost as much as a house.

I think we’ll see much larger active portions of larger MoEs, and larger context windows, which would be useful.
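Bigger context windows hit memory through the KV cache. A rough sizing sketch, with hypothetical, roughly-70B-class dimensions (every number here is an assumption):

```python
# KV cache per sequence: 2 (K and V) * layers * kv_heads * head_dim * bytes * tokens.
# Dimensions are hypothetical, roughly 70B-class with grouped-query attention.
layers, kv_heads, head_dim = 80, 8, 128
bytes_per_elem = 2  # fp16/bf16

def kv_cache_gb(context_tokens: int) -> float:
    bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
    return context_tokens * bytes_per_token / 1e9

for ctx in (8_192, 128_000, 1_000_000):
    print(f"{ctx:>9} tokens -> {kv_cache_gb(ctx):6.1f} GB per sequence")

#      8192 tokens ->    2.7 GB
#    128000 tokens ->   41.9 GB
#   1000000 tokens ->  327.7 GB, i.e. more than one 288 GB GPU for a single sequence
```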

The non-LLM models I run would benefit a lot from this, but I don’t know if I’ll ever be able to justify what they’ll cost.

Lol, this was literally my exact response

lemmy.world/comment/22356808

I feel you man.

The buzzwords make my head hurt. Sounds like a copypasta
Almost like an LLM wrote it…
This is what all the parts we wanted went to
Yeah, I wonder how long it will take them to clue in that no one wants to trade gaming for an AI fucking girlfriend ffs…
Until the money stops pouring in I suppose
I mean if they came with a cool android body we could talk about it. It should at least be able to do cleaning and cooking. Otherwise my wife won’t like it.

It should at least be able to do cleaning and cooking.

So that’s what we need android girlfriends for.

Don’t worry, you can rent them for $30 a month and stream all your video games.
Not even, just the ones they deign to allow
Goodbye, sweet hardware. You deserved better and so did we.
Question is, how long before it makes it to the next DGX Spark? Some people don’t have $10B to burn.
Can’t wait for it to hit the secondhand market in November

So we can do what? Desolder the individual RAM chips and populate them on custom DIMMs?

Pass.

Bringus is gonna make a weird gaming computer by shoving one into a movie rental kiosk.

You scoff but this is already being done in China. They desolder good chips from bad cards and add them to a mule card.

https://overclock3d.net/news/gpu-displays/chinese-developers-create-modified-48gb-nvidia-rtx-4090d-and-32gb-rtx-4080-super-gpus-for-the-ai-cloud/

so that’s why my 5070 laptop has 8 GB of VRAM…

my old 1080 also had 8 GB of VRAM

Your 5070 laptop has 8 GB of VRAM? My desktop 3060 has 12 GB of VRAM, and it’s not even the Ti version.

Jesus fucking Christ, 288GB. And this is why I can’t have 16?
And you have to buy them as a rack of 72.
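For scale, what that 72-GPU rack adds up to (trivial math, but the number is fun):

```python
# Total HBM4 in a 72-GPU rack, using the figures above.
gpus_per_rack, hbm_gb = 72, 288
print(f"{gpus_per_rack * hbm_gb / 1e3:.1f} TB of HBM4 per rack")  # 20.7 TB
```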
But can it run Crysis?
Can it run Doom?

And none of us will be allowed to have them

Only datacenters and Fortune 500 companies will be able to use anything Nvidia

I mean if you have the 3 million to spend on a rack of them, I am sure they would allow you to have them.

I do wonder what happens a few years down the road, when everyone is replacing their GPUs with the latest and greatest variants. What happens to the old racks? Do they get sold for pennies on the dollar because everyone else doing AI wants cutting edge?

The failure rate is high for ML GPUs. The hardware is effectively a consumable.
You can’t do much with them unless you’re into deep learning. And the power bill would bankrupt you. I wish I had a Cerebras box, but even the smallest one is 20 kW, liquid cooled.
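The power-bill math is brutal even at an assumed residential-ish rate (the $/kWh here is a guess, not from the article):

```python
# Running cost of a hypothetical 20 kW box, 24/7.
kw = 20
usd_per_kwh = 0.15       # assumed rate; yours will vary
hours_per_month = 24 * 30

print(f"~${kw * hours_per_month * usd_per_kwh:,.0f}/month")  # ~$2,160, before cooling overhead
```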
Brick them all 🧱
THIS is why we can’t have nice things…
So this is where our future RAM supply went? Fuck this planet then 🤣
HBMx is a different product than DDRx/GDDRx, though parts of the fabbing are probably shared.