As someone with experience in industrial power systems, datacenter design, and a little bit of computing, I find the discussion around CoreWeave's (bad) earnings outlook fascinating. All of it measures compute in megawatts and gigawatts rather than in any actual unit of compute. In 5 years, when the tech underpinning AI is 8x or 16x faster per watt than today, today's 800 MW worth of compute will be worth substantially less than 800 MW of compute built with that day's technology. Or is this a tacit recognition that operations per watt aren't really changing, and the growth is mostly about power density (how many watts you can fit into a 2U server)? I don't feel like it's the latter. Maybe it isn't supposed to make sense? 🤷‍♂️
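A rough sketch of the depreciation argument above. The 2-year doubling period for performance-per-watt is an illustrative assumption, not a measured figure; the function names are my own.

```python
# Back-of-envelope: how much is a fixed 800 MW of today's compute worth,
# measured against future hardware, if performance-per-watt keeps improving?
# The 2-year doubling period is an assumption for illustration only.
def relative_compute_value(years: float, doubling_period_years: float = 2.0) -> float:
    """Fraction of equivalent compute today's fleet represents
    vs. the same wattage of year-N hardware."""
    return 0.5 ** (years / doubling_period_years)

for years in (0, 5, 10):
    frac = relative_compute_value(years)
    print(f"after {years:2d} years: 800 MW of today's gear ~= "
          f"{800 * frac:6.1f} MW-equivalent of then-current gear")
```

Under that assumption, today's 800 MW is equivalent to only ~140 MW of year-5 hardware, which is the point: the MW number stays constant while the compute it represents decays.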
@jerry the topic feels like one of those that's supposed to tickle a quarterly earnings call or a prospectus vs. be, like, informative or useful
@jerry it honestly feels like a dick measuring contest at this point.

@jerry If they were measuring building capacity in MW, that would make sense, given that AFAIK it's the biggest compute-capacity bottleneck these days (a quantifiable limit set by finite HVAC and grid-connection capacity).

But if they're using it as a measure of the actual hardware-in-the-rack compute? Yeah, they're either stupid, blatantly misleading investors with bogus units, or both.

@becomethewaifu yeah, at best they're conflating the two in media reports. I haven't read their earnings statement to see if they do it there too, but the reports are definitely talking about compute capacity in MW. Now, I will tell you that it's going to be super interesting to see what happens to the DCs as they age. I did work for a cloud provider, and we found it more economical to build or rent new facilities than to gut and retrofit the old ones, because the technology had changed that much. Plus it lets you soft-move customers from the old to the new.

So yeah, I think fun times are ahead. Maybe they can become giant Walmarts.

@jerry most of the discourse is focused on power, because current generation tech is much more power-dense than traditional DCs, and the presumption of the market is that meeting current and near term demand requires horizontal scale, and that we can’t possibly build fast enough. They’re just not thinking about future efficiency… it doesn’t fit their quarterly growth paradigm.
@systemalias that makes intuitive sense, but the focus of the reports I'm reading is on the growth, in MW, of CoreWeave's compute over the next few years. It might just be that the reporters have contorted a metric and taken it out of context.

@jerry so, put differently… short-term incentives screw up all kinds of analyses… but in effect, in this industry, revenue projections look good when you have the capital and expertise to quickly deploy and integrate new-gen platforms, but those platforms are so power-hungry that access to power infrastructure is also a competitive supply-chain concern.

No power: no GPUs. No talent to quickly operationalize: bad customer experience. No capital: get left behind on older generation.

@jerry

Whether the first is happening and to what degree I won't argue.

However, the second point is definitely already happening. A "standard" datacenter rack was, not that long ago, limited to about 50 kW. The rise of GPUs in the datacenter pushed that to 100 kW or more. The rise of LLMs, the corresponding increase in GPU counts, and the drastically increasing power requirements of individual GPUs mean that "AI"-intended products like Nvidia's racks run up to 250 kW in some configurations. And they're talking about 1 MW racks in the near future.
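The density progression above has a direct facility-level consequence worth making explicit: at a fixed amount of critical IT power, each generation supports far fewer racks. A minimal sketch, using the rack figures from the post and ignoring cooling overhead/PUE for simplicity:

```python
# How many racks does a fixed 10 MW of critical IT power support at each
# rack-density generation mentioned above? (Cooling overhead ignored.)
def racks_supported(facility_kw: int, rack_kw: int) -> int:
    return facility_kw // rack_kw

FACILITY_KW = 10_000  # 10 MW of IT load
for label, rack_kw in [("traditional", 50), ("GPU era", 100),
                       ("current AI racks", 250), ("projected 1 MW", 1000)]:
    print(f"{label:>16}: {rack_kw:>4} kW/rack -> "
          f"{racks_supported(FACILITY_KW, rack_kw):>3} racks per 10 MW")
```

Same building, same feed: 200 racks at 50 kW each, only 10 racks at 1 MW each. That concentration is part of why the power number, not floor space, dominates the conversation.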

That's downright scary. If some tech is working in the aisle and cooling fails, is she gonna get flash-fried before that rack can shut itself down?

@cazabon unless they go bonkers with cabling or busbars, a 1 MW rack is gonna need to be fed with something like 50 kV, and you're right: that is basically a flash bomb that makes crappy recipes

@jerry

I believe they are talking about big busbars, and high voltage to the individual chassis, with DC-DC converters to take it down to the ELV the GPUs use [edit: fix thinko].

800 VDC, I think I read? Maybe they're trying to benefit from economies of scale in electric-vehicle components. I remember thinking "That's still over 1 kA", and that's a magnitude only a specialized niche of electrical engineers has much practical experience with.
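The currents being discussed here fall straight out of I = P / V. A quick check of the numbers in this sub-thread (415 V is a common AC distribution voltage, included only for comparison):

```python
# Current draw of a 1 MW rack at a few candidate distribution voltages.
# I = P / V, conductor and conversion losses ignored.
def rack_current_amps(power_watts: float, volts: float) -> float:
    return power_watts / volts

for volts in (415, 800, 50_000):
    amps = rack_current_amps(1_000_000, volts)
    print(f"1 MW at {volts:>6} V -> {amps:,.0f} A")
```

At 800 VDC a 1 MW rack draws 1,250 A, which matches the "still over 1 kA" reaction; 50 kV would cut that to a tame 20 A, but bringing 50 kV into a rack creates its own set of very unpleasant problems.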

@cazabon datacenter workers are going to need high voltage PPE 😂

@jerry

I don't envy them that! And nonferrous zippers and button snaps, come to think of it...

#Maxwell #JamesClerkMaxwell

@jerry isn't it that the hardest part of DC growth is powering the gear, not getting new equipment? So DCs get measured in MW, and then finance people end up measuring everything in MW?

It also leads to absolutely batshit claims like tomshardware.com/tech-industry…

Elon Musk says idling Tesla cars could create massive 100-million-vehicle strong computer for AI — 'bored' vehicles could offer 100 gigawatts of distributed compute power

Untapped ‘100 gigawatts of inference’ is a significant asset, Tesla boss tells investors.

Tom's Hardware
@silverwizard @jerry And of course Tesla would compensate the owners for the power used by this. And allow them to opt out if they needed the power for a journey. 🙈
@arafel @jerry I just think it's funny they're measuring battery capacity of a car as a metric of compute!
@silverwizard @jerry “it’s all GWh after all, it must be interchangeable.”
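Taking the Tom's Hardware headline at face value, the arithmetic is easy to check: 100 GW spread across 100 million vehicles comes out to about 1 kW of compute per parked car, a figure the article asserts rather than derives.

```python
# Sanity-check the headline claim: 100 GW of "distributed compute"
# across 100 million Tesla vehicles -> power contribution per car.
vehicles = 100_000_000
total_gw = 100
kw_per_car = total_gw * 1_000_000 / vehicles  # GW -> kW, divided per vehicle
print(f"{kw_per_car:.1f} kW per vehicle")
```

That's roughly a high-end gaming PC running flat out in every single car, sustained, which is exactly the kind of number the "it's all GWh, it must be interchangeable" quip is mocking.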
@jerry It's one of the many absolutely idiotic things about the "AI computing" bubble, and mostly an artifact of the fact that LLMs are several orders of magnitude less power-efficient than any common technology we've ever used. I hope things become more efficient, but I also hope everyone who buys the ridiculous industry-shill line that "actually, AI doesn't use that much power" gets a chance to hold a running H200.
@jerry My understanding (possibly wrong) is that we've reached a point where Moore's Law no longer holds because the transistors are so tiny, which implies that compute/watt is not going to progress exponentially either.
@jerry This reminds me of the weird fact that in the plans for green hydrogen, electrolyzer capacity is not measured in tons of annual H2 production but in MW of electric power consumed, which completely disregards any differences in efficiency. It's like asking "How much money do we burn?" instead of "What do we actually get out of it?"
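The electrolyzer point is easy to quantify. A minimal sketch, assuming ~50 kWh of electricity per kg of H2 (a commonly cited real-world figure; H2's lower heating value is ~33.3 kWh/kg, so that's roughly 67% efficient) and a 90% capacity factor; both numbers are illustrative assumptions:

```python
# Why "MW of electrolyzer" hides the thing you actually care about:
# annual H2 output depends on efficiency (kWh consumed per kg produced).
def annual_h2_tonnes(power_mw: float, kwh_per_kg: float,
                     capacity_factor: float = 0.9) -> float:
    hours = 8760 * capacity_factor          # operating hours per year
    kg = power_mw * 1000 * hours / kwh_per_kg
    return kg / 1000                        # kg -> tonnes

for eff in (45, 50, 55):
    print(f"100 MW at {eff} kWh/kg -> {annual_h2_tonnes(100, eff):,.0f} t/yr")
```

The same "100 MW" plant swings by thousands of tonnes of H2 per year depending on efficiency, which is exactly the information the MW-only metric throws away.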