@davidgerard 30 megalitres a day? I started shouting WTF at the screen incessantly when I heard that figure (in bushel-inches or whatever in your podcast)
@Unixbigot @davidgerard Drink 30 megalitres of water a day? 
@catsalad @davidgerard "2-8 million gallons a day" is the alleged water use of Google's AI datacentre according to https://youtu.be/iZ4WnCavOIg
How much water do the data centres use? It’s a secret!

@Unixbigot @catsalad not even alleged, it's the *admitted* planned usage

@davidgerard @Unixbigot @catsalad to put this number in actual perspective:
The annual rainfall in Florida is about 56 inches. USGS' calculator only lets you use 50.

So slightly less than Florida's annual rainfall works out to 46,596,264,000,000 gallons. Google is admitting to using 2,920,000,000 gallons per year, before waste and losses.
That's over 6% of all annual rainfall. In FLORIDA.

That is more than enough to put aquifers into terminal decline.

(edit: yup, math is wrong, I mathed pre-coffee.)

@rootwyrm @davidgerard @Unixbigot @catsalad Something is off with this math. 2,920,000,000 is not 6% of 46,596,264,000,000.
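Quick sanity check in Python (gallon figures taken straight from the posts above, as given):

rainfall_gal = 46_596_264_000_000        # ~50in of rain over Florida, per upthread
google_gal_per_year = 8_000_000 * 365    # admitted 8M gal/day, annualized
print(f"{google_gal_per_year / rainfall_gal:.4%}")   # 0.0063% -- nowhere near 6%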

@jeffreyolivier @davidgerard @Unixbigot @catsalad yeah, doesn't surprise me, pre-coffee and thrown into Excel.
But it's a not-insignificant number, to say the least. NEORSD is one of the largest sewer systems in the country and treats 200M gallons per day. A single Google datacenter *consumes* >4% of that. (The *consumes* is important.)

So at 73 admitted data centers, that's >584M GPD burned. I did the full math including other uses, but I think those notes are on the dead PC.
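A quick Python sketch of that arithmetic (the NEORSD and per-DC figures are the admitted numbers above, not measurements):

neorsd_gpd = 200_000_000      # NEORSD treats ~200M gallons per day
dc_gpd = 8_000_000            # admitted consumption per data center
print(f"{dc_gpd / neorsd_gpd:.0%}")   # 4% of NEORSD's throughput, per DC
print(f"{73 * dc_gpd:,}")             # 584,000,000 GPD across 73 data centers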

@jeffreyolivier @davidgerard @Unixbigot @catsalad basically it works out that the 8M GPD number is total bullshit top to bottom (IIRC it's +15% for leak and loss, so 9.2M+ each), and that's per *building*, so at three buildings a site it's more like 27.6M+ consumed per site. And it's fresh to gray with zero treatment, so it's straight extraction from the water table.
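Spelled out (the three-buildings-per-site count is inferred from the 27.6M figure, not something Google publishes):

per_building_gpd = 8_000_000 * 115 // 100   # +15% leak and loss -> 9,200,000 GPD
per_site_gpd = per_building_gpd * 3         # ~3 buildings per site (inferred)
print(f"{per_building_gpd:,} {per_site_gpd:,}")   # 9,200,000 27,600,000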

@jeffreyolivier @davidgerard @Unixbigot @catsalad a better comparison is probably this:

In 2020 the entire nuclear power industry withdrew (used) 42,045 MG per day, using the most pessimistic numbers. But actual *consumption* (loss) was 946 MGD, or ~2.2%. The remaining 97.8% is returned to the water table, and a small portion of the 2.2% is evaporative.

73 Google data centers consume a minimum of 671 MGD, or more than 70% of what the entire nuclear power industry in the United States consumes.
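Same sketch for the nuclear comparison (withdrawal and consumption figures as quoted above):

withdrawal_mgd = 42_045          # 2020 US nuclear withdrawal, most pessimistic
consumption_mgd = 946            # actual consumptive loss
print(f"{consumption_mgd / withdrawal_mgd:.1%}")   # ~2.2% consumed, rest returned

dc_mgd = 73 * 9.2                # 73 DCs at 9.2M GPD each -> ~671.6 MGD
print(f"{dc_mgd / consumption_mgd:.0%}")           # ~71% of nuclear's consumption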

@rootwyrm @jeffreyolivier @Unixbigot @catsalad in practice with data centres, "returned to the water table" means "doing the minimum you can get away with before dumping it, down to nothing at all if you don't think you'll get caught"

just pumping the water back into the stream, but now it's contaminated and 40 deg C

@davidgerard @jeffreyolivier @Unixbigot @catsalad anything serious is using a much more complex system than that. Typically 'wet' chilled glycol. (Properly designed ones use closed-circuit, high-capacity fluid at the rack, with heat exchange to a closed-circuit glycol loop.) IOW, they're cutting a whole lot of corners that *also* negatively impact performance.
@rootwyrm @jeffreyolivier @Unixbigot @catsalad shocked to hear the Temu Data Centre industry running Nvidia near-future e-waste might *cut corners*
@davidgerard @jeffreyolivier @Unixbigot @catsalad I really want to get my hands on one of the newer QDC shitboxes. The old QDC shitboxes were something below 40/60 glycol/air (meaning less than 40% of heat capacity from liquid!) I have a feeling the new stuff is just LAUGHABLY bad at actual *transfer* and is just using ginormous chunks of copper as a crutch.
Transfer's the really shitty, expensive, and hard part.
They're probably really stupid on fluid choice too.

@davidgerard @jeffreyolivier @Unixbigot @catsalad to give some idea on good vs. bad: Dell C6525s are probably over 80/20 fluid/air at 5.2kW per 2U. Very respectable numbers, fairly well designed. (Should've used a monoblock design for several reasons.)

Most of the A100 SXM boards I've been able to look at are using the *same* blocks in series with over 200W of *fan* draw just for the power section.
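To put numbers on the split (a sketch; 80/20 is the estimate above, not a published Dell spec):

chassis_w = 5200                  # ~5.2kW per 2U C6525 chassis
fluid_w = chassis_w * 80 // 100   # ~80% removed by liquid -> 4,160W
air_w = chassis_w - fluid_w       # the ~1,040W remainder is on air (fans)
print(fluid_w, air_w)             # 4160 1040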

@davidgerard okay, my brain's gotta go into mono vs. modular.

Basically, per-die blocks are only locally efficient and create risk points (leaks). All your power-stage heat (which is SUBSTANTIAL) is left to be cooled by air, which takes a LOT of fan power.

The reason power stages were typically left to air was that you were talking minimal waste heat. 300W TDP (AMD) EPYCs are fine with dinky little aluminum sinks and 200-300 LFM! Hell, 300W CPUs are doable with air alone.

@davidgerard and this is where it all falls apart and I start going "WTF NO."
NV claims the GB100 is 1kW TDP. A *kilowatt* of heat *per die*. So you're going from ~650W in a chassis to 10kW+. Never mind the other thermodynamic problems, that is a FUCKTON of waste heat from the power stages. Over 250W per die, times 8 dies, that's 2kW of waste. You don't ever remove 100%, but doing it with air means more and more power is effectively wasted trying to keep that heat in check.
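The chassis math, spelled out (all figures from this post; the GB100 TDP is NV's claim, the 250W power-stage waste is the estimate above):

die_w = 1000                 # claimed GB100 TDP per die
dies = 8
vrm_waste_w = 250            # power-stage waste per die, per the estimate above
print(die_w * dies)          # 8000W from the dies alone
print(vrm_waste_w * dies)    # 2000W of power-conversion waste
print(die_w * dies + vrm_waste_w * dies)   # 10000W+ per chassis, up from ~650W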