Nvidia delivers first Vera Rubin AI GPU samples to customers — 88-core Vera CPU paired with Rubin GPUs with 288 GB of HBM4 memory apiece
288 GB HBM4 memory
jfc…
Looking at the specs… fucking hell, these things probably cost over $100k.
I wonder if we’ll see a generational performance leap with LLMs scaling to this much memory.
LLMs can already use way more, I believe; they don’t really run on a single one of these things anyway.
The HBM4 would likely be great for speed though.
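For a rough sense of scale (my own back-of-envelope math, not from the article): weights alone for today’s larger dense models already blow past 288 GB, which is why a single card was never the target. A minimal sketch, ignoring KV cache, activations, and runtime overhead, with illustrative parameter counts:

```python
# Back-of-envelope weight memory vs a single 288 GB card.
# Illustrative parameter counts only; ignores KV cache, activations, and overhead.

BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "int4": 0.5}
HBM_PER_GPU_GB = 288

def weights_gb(params_billion: float, dtype: str) -> float:
    """Approximate weight footprint in GB for a dense model."""
    # 1e9 params * bytes-per-param / 1e9 bytes-per-GB, so the 1e9s cancel
    return params_billion * BYTES_PER_PARAM[dtype]

for params_b in (70, 405, 1000):
    for dtype in ("fp16", "fp8"):
        gb = weights_gb(params_b, dtype)
        gpus = -(-gb // HBM_PER_GPU_GB)  # ceiling division
        print(f"{params_b}B @ {dtype}: ~{gb:,.0f} GB of weights -> at least {int(gpus)} GPU(s)")
```

Even a 405B dense model at FP16 is ~810 GB of weights, so these parts are built to be pooled across a rack, not used one at a time.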
Yeah they’re going to cost as much as a house.
I think we’ll see much larger active portions of larger MoEs, and larger context windows, which would be useful.
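To sketch why more HBM pushes in that direction (illustrative numbers of my own, not anything announced): an MoE has to hold all of its expert weights in memory even though only the active subset is computed per token, and long contexts add a sizeable KV cache on top.

```python
# Rough sketch (assumed/illustrative numbers): memory pressure for a big MoE.
# All experts live in HBM even though only the "active" ones run per token,
# so capacity, not compute, is often the ceiling; long contexts add KV cache on top.

def moe_weights_gb(total_params_b: float, bytes_per_param: float = 1.0) -> float:
    """Weight footprint in GB, FP8 (1 byte/param) by default."""
    return total_params_b * bytes_per_param

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                tokens: int, bytes_per_elem: int = 2) -> float:
    """Per-sequence KV cache: K and V for every layer, KV head, and token (FP16)."""
    return 2 * layers * kv_heads * head_dim * tokens * bytes_per_elem / 1e9

# Hypothetical 600B-total / 40B-active MoE served in FP8:
print(f"weights: ~{moe_weights_gb(600):.0f} GB")   # ~600 GB regardless of active size
# Hypothetical 1M-token context, 80 layers, 8 KV heads of dim 128:
print(f"KV cache: ~{kv_cache_gb(80, 8, 128, 1_000_000):.0f} GB per sequence")
```

The active-parameter count sets the compute per token, but the total-parameter count and the context length set the memory bill, which is what 288 GB per GPU helps with.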
The non-LLM models I run would benefit a lot from this, but I don’t know if I’ll ever be able to justify the cost.
It should at least be able to do cleaning and cooking.
So that’s what we need android girlfriends for.
So we can do what? Desolder the individual RAM chips and populate them on custom DIMMs?
Pass.
You scoff, but this is already being done in China. They desolder good chips from bad cards and add them to a mule card.
so that’s why my 5070 laptop has 8 GB of VRAM…
my old 1080 also had 8 GB of VRAM
And none of us will be allowed to have them
Only datacenters and Fortune 500 companies will be able to use anything Nvidia makes.
I mean, if you have the $3 million to spend on a rack of them, I’m sure they would allow you to have them.
I do wonder what happens a few years down the road when everyone is replacing their GPUs with the latest and greatest variants. What happens to the old racks? Do they get sold for pennies on the dollar because everyone else doing AI wants cutting edge?