There's a lot of stuff going around about datacenters, so I decided to do a quick tour yesterday of some of the datacenters in the Salt Lake Valley. Some are indeed quite large, but there are a bunch of smaller ones too - and they are not always where you think!

All of these are publicly known, and you can find them (and ones in your own area) at https://www.datacentermap.com/ .

Let's start with a datacenter that I go by all the time! It's across the street from my grocery store in downtown #SLC. It's listed as a colocation facility; datacenters are famously secretive about who their tenants are, but we can guess that it probably hosts servers belonging to nearby businesses, especially ones that want their storage, etc. nearby, but don't want to have to maintain a secure, cooled room. Given the number of banks that have headquarters nearby, I'd bet at least some of them are customers.

This is a fairly little guy, with apparently 16k square feet of floorspace and 1.6MW of power.

Next, an even smaller datacenter, that just about anyone in #SLC has seen! This is XMission, a local Internet Service Provider that's been running since 1993, so one of The Ancients in Internet time. It's on a very busy part of 4th South, and if you've been by at night, you've seen the big LED display on the front of the building that they put various animations on.

One of the things that I *think* is probably in this building is SLIX: https://slix.net/traffic/ - this is an Internet Exchange Point (IXP), where various carriers meet up to exchange traffic without it having to travel long distances. These are often run as a sort of community infrastructure - it's in the best interests of all networks involved to connect to each other so that they can do their jobs more efficiently.

SLIX is fairly small (according to their own data they carry ~200Gbps, with some spikes up to 1Tbps). There are about 40 networks that meet there: https://slix.net/participants/ . Funny story, when I first got Google Fiber at my house, I was getting routed through California to get to the University of Utah campus just a few miles away. I pinged a guy I know who pinged a guy he knows who ... learned that some of the participants in SLIX didn't have their routes set up right. A config change later, and not only me, but basically everyone on any commercial ISP in the Salt Lake Valley had much more direct routes to campus!

This one is physically larger (22.5k sq ft) than the first datacenter we looked at, but claims less power: 490 kW. That's not a ton of power - my Chevy Bolt can draw 150 kW from its battery at max acceleration, and there are much bigger and sportier EVs that can draw almost as much as this entire datacenter! (though only for brief periods, of course - this datacenter probably draws a substantial fraction of that 24/7) Why does this datacenter have so much less power?

Well, one of the key factors of datacenters is how power-dense they are: how much power they are designed to deliver to each rack, and how much heat they are capable of moving out.

Compute - especially GPU compute for AI - is incredibly power-dense and incredibly hot. So we can guess this datacenter is probably not for compute. If I had to guess, this is probably mainly intended as a "carrier hotel" - it's probably focused on having telecoms companies as tenants. I base this both on the lower power density, and where it is: it's near the Utah State Fairpark, which is in turn relatively close to train tracks heading both east-west and north-south. A lot of long-distance fiber in the US follows both the rail and Interstate road networks, because it's relatively straightforward to run fiber alongside transportation links. Salt Lake City lies on the west side of some of the few passes through the Rockies, so it has a ton of fiber, following I-80, the Union Pacific, etc. This is a good place for carrier hotels.

How is a carrier hotel different from an IXP? At an IXP, a carrier is just pulling in some fiber and maybe a router or two. But carriers need a lot more equipment than that: servers of various kinds, plus the bigger backbone routers that fan out in many directions, etc. Mobile carriers have a fair amount of wired topology to deal with, too. That's the kind of stuff they put in carrier hotels, and this is a good spot for them.

I picked this shot because, in the background, you can see the Gadsby Power Plant, one of the main sources for power in #SLC. That's a natural gas plant that generates about 300MW. Put a pin in that number, we'll come back to it later.

Now we're getting a bit bigger, and also more residential. This one sits on the edge of a residential neighborhood, on 200 E, in Millcreek. This is a 36k sq ft, 1.9 MW facility. What's in there? I don't know; as mentioned above, datacenters don't tend to tell you who their tenants are. There's probably some reasonable computing power in there, but it's probably not dense enough to be very GPU-heavy.

The sounds of the HVAC systems were quite noticeable at this one. Any time you are dealing with electricity, you are also dealing with heat. In a datacenter, the power drawn by the compute and network equipment gets turned into heat, and you need to get rid of it. Of course, you want to spend as little electricity getting rid of heat as you can. The industry measures this with "Power Usage Effectiveness", commonly called PUE. A PUE of 1.5 means that for every kW that goes to computers, 0.5 kW goes to other stuff - mostly cooling, but also power distribution losses, lighting, etc. A PUE of 1.5 is pretty good; supposedly some of the biggest datacenters have a PUE of around 1.1.
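To make the PUE arithmetic concrete, here's a quick sketch (the 1 MW IT load is just an assumed example figure, not a number from any of these facilities):

```python
# PUE = total facility power / power delivered to IT equipment.
def total_power_kw(it_load_kw: float, pue: float) -> float:
    """Total facility draw implied by an IT load at a given PUE."""
    return it_load_kw * pue

it_load = 1000  # assume 1 MW of servers for illustration
for pue in (1.5, 1.2, 1.1):
    total = total_power_kw(it_load, pue)
    overhead = total - it_load
    print(f"PUE {pue}: {total:.0f} kW total, {overhead:.0f} kW of overhead")
```

At a 1 MW IT load, going from a PUE of 1.5 to 1.1 saves about 400 kW of continuous overhead - nearly the entire power budget of that XMission building.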

This actually highlights one way in which having a fairly large-scale datacenter is efficient: putting all the computers in one place does enable you to use cooling systems that get rid of more heat for less power. Of course, how many computers you have, where your power is coming from, what mechanisms you use to cool them, etc. matters too! Again, we'll get back to that later.

By the way, my guess would be that only the building in the front is a datacenter - the building in the back has too many truck bays and not enough cooling. It's probably a small warehouse of some sort.

This is the biggest datacenter I visited - campus, actually. This facility is in West Jordan, near the South Valley Regional Airport. It's big enough that I have to post several pictures to get you a real sense of the size (but it's not the biggest datacenter in Utah.)

What you're looking at here is three buildings that, together, have a power capacity that's reported (depending on the source) to be around 160 MW (put a pin in that number too.)

Two of these buildings are multi-tenant (the ones with the flat white roofs) like the others we've seen.

That third one in the back, with all of the cooling towers on top, has supposedly been built for a single hyperscaler, and is supposedly something like an 80-100MW building. Which hyperscaler? That information is not public. That's a whole lot of cooling on the roof (which is reported to be water-free), so my money would be on this being an AI data center.

In these pictures, you can see more electrical infrastructure. Bringing that much power into one place takes a lot of wires.

The reason I went on this little tour was to put in perspective the proposed Stratos datacenter project in Box Elder County, UT.

Stratos is supposedly designed to eventually reach a size of 9 GW. That is more than double the 4 GW that the entire state of Utah currently uses. The entire campus is supposed to be big enough that, for comparison, it would fill over 10% of the Salt Lake Valley, as shown in this image (which I didn't make).

That last datacenter campus? At ~160 MW, those three buildings put together are designed for a load roughly 1/56th the size of Stratos. That 300 MW natural gas power station we saw in the background? Stratos is supposed to generate its own power on-site, so it would need 30 of those things. (Or maybe more - remember PUE?)
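The back-of-envelope math, using only the figures already mentioned in this thread:

```python
# Scale comparison using the numbers from earlier in the thread.
stratos_mw = 9000   # proposed Stratos capacity (9 GW)
campus_mw = 160     # reported capacity of the West Jordan campus
gadsby_mw = 300     # Gadsby natural gas plant output
utah_mw = 4000      # rough current statewide load (4 GW)

print(f"Stratos vs. the West Jordan campus: {stratos_mw / campus_mw:.0f}x")
print(f"Gadsby-sized plants needed: {stratos_mw / gadsby_mw:.0f}")
print(f"Multiple of Utah's current load: {stratos_mw / utah_mw:.2f}x")
```

And that last ratio is before accounting for PUE overhead on top of the compute load.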

In terms of carbon output, this thing is designed to be an absolute monster.

There's not much getting around that. They have handwaved about including solar and/or wind, but without anything concrete, we should assume this is a whole lot of carbon.

How about water?

Well, that's harder to tell, given all the vagaries and "if"s in the public information so far.

Remember, a datacenter has to get rid of a lot of heat. A datacenter that is generating its own energy on-site has to get rid of *far* more heat.

In the desert West, the most *energy* efficient way of getting rid of heat in the hot summer months is evaporative cooling: you let water evaporate, and the evaporation carries the heat away. This has, historically, been a major way of cooling both natural gas plants and datacenters, as well as homes, etc.

The reason this works so well in the West is the same reason it's problematic: our air is very dry, so evaporative cooling is very effective - but that dry air is connected to the fact that we don't have much water to begin with.

There *are* ways to air-cool natural gas turbines, and there *are* ways to cool datacenters that are not evaporative cooling. They are more *water* efficient. But they are less *power* efficient, which means, in this context, burning even more natural gas.

The backers of Stratos claim that they are trying to get some very new, high-tech gas turbines that operate without water cooling, or at least with very little. That does assuage some water concerns. But their language is very hedge-y - they're trying, they hope to jump in line for the limited supply of them, etc.

They also claim they will use "closed loop" water systems for cooling the datacenter. There are several things this *could* mean, and we need to know more in order to actually understand it. Most cooling systems for datacenters and even large buildings have a closed loop of water (or another coolant) for moving heat around. That's because we cannot *make* cold, we can only *move* heat. In some datacenters, this cold loop comes into the room, where it's used to cool air, which is blown across the servers. In higher-power-density datacenters, the coolant loop comes all the way to the individual rack in order to cool the air right before it enters the servers. In the most high-tech datacenters (which Stratos would likely be), it comes all the way *inside* the server, directly exchanging heat with the hot bits like CPUs and GPUs.

Coolant in these kinds of systems circulates in a closed loop, so you can generally consider the loop to consume little to no water after it's been filled.

But: you still have to make the heat go away somehow. This is where Stratos *might* use evaporative cooling. Or they might opt for one of the more expensive, less energy efficient dry systems. Saying "we have a closed loop" only tells us *part* of the story!

Here's what we know: the Stratos people have secured 13,000 acre-feet of water rights. In numbers that mean more to most of us, that's about 4 billion gallons per year.
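A quick sanity check on that conversion, using the standard figure of ~325,851 US gallons per acre-foot:

```python
# Converting the reported water right from acre-feet to gallons per year.
GALLONS_PER_ACRE_FOOT = 325_851  # standard US conversion factor

acre_feet = 13_000
gallons = acre_feet * GALLONS_PER_ACRE_FOOT
print(f"{gallons / 1e9:.1f} billion gallons per year")
```

That works out to about 4.2 billion gallons per year, so "about 4 billion" is a fair round number.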

They *claim* that's far more water than they need, and they won't use most of it.

But: if they don't manage to get their air-cooled gas turbines (which, in addition to being less efficient, also cost more), or decide to go with some evaporative cooling for the datacenter (because it's cheaper and uses less power), they could very easily use that much water. We are very much in a "trust me" situation, and it's not clear that we *should* trust what developers say when they are trying to get permits. We need to get independent studies and binding contracts.

For those who aren't locals, you might not be aware, but: the Great Salt Lake is shrinking. People are trying (not hard enough, probably) to save it. Not just because hey, what would we call our city without it, but also because the lakebed is full of chemicals we'd rather not be breathing in, thanks.

Stratos would not literally pull water out of the lake (which it is quite close to). But: the water rights they have obtained are in the watershed of the lake. So: if they use the water rights they have obtained, they might well contribute to the drying up of the lake.

The point here is: they are hoarding water rights that they claim they will not use. The more reasonable bet is to assume they *will* use them, and we need a study by actual hydrologists to understand whether using that water would accelerate the lake's demise.

And, you will notice that I have not even touched on a ton of *other* issues, such as:

1) Is there actually demand for all of these computers?
2) Would it be a good idea to fill this demand even if it does exist?
3) Can we build enough computers to fill this thing in a reasonable time anyway?
4) How far will this project get before the AI bubble pops, and will it leave anyone other than the investors holding the bag?
5) If it does get fully built, what other resources (like more water rights) might they go after?
6) Is it a wise idea to provide huge tax breaks to companies that expect to be highly profitable?
7) This is being done through the Military Installation Development Authority - what's the actual military connection here?
8) Regardless of whether it's wet or dry, is dumping this much heat into one valley a good idea?
9) How could burning that much natural gas *not* raise gas and electricity prices?
10) Can we trust the developers' numbers for how many jobs this will create locally?

Just to name a few.

Here's what I hope your takeaway from this thread will be: datacenters come in many sizes, have many uses, and are not necessarily where you'd expect. The impact they have locally depends on how they're powered, how they're cooled, what they're used for, who owns them, and how big they are. It's worth looking at all of these things when considering whether a datacenter project is a good idea or not.
Closed-loop cooling systems save water, but can be a drain on electricity - KSLTV.com

While closed-loop cooling systems, like the one being touted for a large data center in Box Elder County, can save lots of water, they often use more electricity in return, which can impact the environment in other ways, according to Dr. Ricci, a professor in the University of Utah's school of computing.

@ricci
Interesting! So do you use distilled water for the closed loop?

@katrinakatrinka

I don't know the exact level of purity they go for, but yeah, removing things that could leave mineral deposits or cause corrosion is important

It is often mixed with glycol to lower the freezing point (no idea what Stratos would do, they have given us nowhere near that level of detail)

@ricci
I use a CPAP and was thinking of the kind of water I need in that. Adding something to lower the freeze point is also interesting.