As someone with experience in industrial power systems, datacenter design, and a little bit of computing, I find the discussion around CoreWeave’s (bad) earnings outlook fascinating. All of it measures compute in megawatts and gigawatts rather than in actual units of compute. In 5 years, when the tech underpinning AI is 8x or 16x faster (per watt) than today, today’s 800 MW worth of compute will be worth substantially less than 800 MW of compute built with the technology of that day. Or is this a tacit recognition that operations per watt aren’t really changing, and the growth is mostly about power density (how many watts fit into a 2U server is what’s actually increasing)? I don’t feel like it’s the latter. Maybe it isn’t supposed to make sense? 🤷‍♂️
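The depreciation argument can be put in rough numbers. A minimal sketch using the 8x/16x perf-per-watt figures above as illustrative assumptions (not CoreWeave’s actual numbers), and assuming the improvement compounds smoothly over the period:

```python
def relative_value(years: float, improvement_factor: float, horizon: float = 5.0) -> float:
    """Value of today's compute relative to same-wattage compute `years` out,
    assuming perf/watt improves by `improvement_factor` over `horizon` years."""
    annual = improvement_factor ** (1 / horizon)  # implied yearly perf/watt gain
    return 1 / annual ** years

# At 8x perf/watt over 5 years, a fixed 800 MW of today's hardware delivers
# only 1/8 of the compute that 800 MW of year-5 hardware would:
print(f"{relative_value(5, 8):.3f}")   # 0.125
print(f"{relative_value(5, 16):.4f}")  # 0.0625
```

So if the market prices capacity purely in MW, it is implicitly treating a megawatt as a constant unit of compute, which only holds if perf/watt stays flat.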
@jerry most of the discourse is focused on power because current-generation tech is much more power-dense than traditional DCs, and the market’s presumption is that meeting current and near-term demand requires horizontal scale, and that we can’t possibly build fast enough. They’re just not thinking about future efficiency… it doesn’t fit their quarterly growth paradigm.
@systemalias that makes intuitive sense, but the focus of the reports I’m reading is on the growth, in MW, of CoreWeave’s compute over the next few years. It might just be that the reporters have contorted a metric and taken it out of context.

@jerry so, put differently… short-term incentives screw up all kinds of analyses… but in effect, in this industry, revenue projections look good when you have the capital and expertise to quickly deploy and integrate new-gen platforms, but those platforms are so power-hungry that access to power infrastructure is also a competitive supply-chain concern.

No power: no GPUs. No talent to quickly operationalize: bad customer experience. No capital: get left behind on older generation.