There's a lot of stuff going around about datacenters, so I decided to do a quick tour yesterday of some of the datacenters in the Salt Lake Valley. Some are indeed quite large, but there are a bunch of smaller ones too - and they are not always where you think!

All of these are publicly known, and you can find them (and ones in your own area) at https://www.datacentermap.com/ .

Let's start with a datacenter that I go by all the time! It's across the street from my grocery store in downtown #SLC. It's listed as a colocation facility; datacenters are famously secretive about who their tenants are, but we can guess that it probably hosts servers belonging to nearby businesses, especially ones that want their storage, etc. nearby, but don't want to have to maintain a secure, cooled room. Given the number of banks that have headquarters nearby, I'd bet at least some of them are customers.

This is a fairly little guy, with apparently 16k square feet of floorspace and 1.6 MW of power.

Next, an even smaller datacenter, that just about anyone in #SLC has seen! This is XMission, a local Internet Service Provider that's been running since 1993, so one of The Ancients in Internet time. It's on a very busy part of 4th South, and if you've been by at night, you've seen the big LED display on the front of the building that they put various animations on.

One of the things that I *think* is probably in this building is SLIX: https://slix.net/traffic/ - this is an Internet Exchange Point (IXP), where various carriers meet up to exchange traffic without it having to travel long distances. These are often run as a sort of community infrastructure - it's in the best interests of all networks involved to connect to each other so that they can do their jobs more efficiently.

SLIX is fairly small (according to their own data they carry ~200Gbps, with some spikes up to 1Tbps). There are about 40 networks that meet there: https://slix.net/participants/ . Funny story, when I first got Google Fiber at my house, I was getting routed through California to get to the University of Utah campus just a few miles away. I pinged a guy I know who pinged a guy he knows who ... learned that some of the participants in SLIX didn't have their routes set up right. A config change later, and not only me, but basically everyone on any commercial ISP in the Salt Lake Valley had much more direct routes to campus!

This one is physically larger (22.5k sq ft) than the first datacenter we looked at, but claims less power: 490 kW. That's not a ton of power - my Chevy Bolt can draw 150 kW from its batteries at max acceleration, and there are much bigger and sportier EVs that can draw close to what this whole datacenter does! (though only for brief periods, of course; this datacenter probably draws a substantial fraction of that 24/7) Why is there so much less power for this datacenter?

Well, one of the key factors of datacenters is how power-dense they are: how much power they are designed to deliver to each rack, and how much heat they are capable of moving out.
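
To make that concrete, here's a quick back-of-the-envelope comparison using the listed figures. These are whole-building averages; real density specs are quoted per rack, and I don't have those:

```python
# Rough whole-building power density from the listed figures.
# (Real datacenter specs are quoted per rack; this is just a sanity check.)

facilities = {
    "downtown colo": {"power_kw": 1600, "floor_sqft": 16_000},
    "XMission":      {"power_kw": 490,  "floor_sqft": 22_500},
}

for name, f in facilities.items():
    watts_per_sqft = f["power_kw"] * 1000 / f["floor_sqft"]
    print(f"{name}: ~{watts_per_sqft:.0f} W/sq ft")

# downtown colo: ~100 W/sq ft
# XMission: ~22 W/sq ft
```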

Compute - especially GPU compute for AI - is incredibly power-dense and incredibly hot. So we can guess this datacenter is probably not for compute. If I had to guess, this is probably mainly intended as a "carrier hotel" - it's probably focused on having telecoms companies as tenants. I base this both on the lower power density, and where it is: it's near the Utah State Fairpark, which is in turn relatively close to train tracks heading both east-west and north-south. A lot of long-distance fiber in the US follows both the rail and Interstate road networks, because it's relatively straightforward to run fiber alongside transportation links. Salt Lake City lies on the west side of some of the few passes through the Rockies, so it has a ton of fiber, following I-80, the Union Pacific, etc. This is a good place for carrier hotels.

How is a carrier hotel different from an IXP? At an IXP, the carrier is just pulling in some fiber and maybe one or two routers. But they need a lot more equipment than that - they have servers of various kinds too, plus the bigger backbone routers that fan out in many directions, etc. Mobile carriers have a fair amount of wired topology to deal with. That's the kind of stuff they put in carrier hotels, and this is a good spot for them.

I picked this shot because, in the background, you can see the Gadsby Power Plant, one of the main sources for power in #SLC. That's a natural gas plant that generates about 300MW. Put a pin in that number, we'll come back to it later.

Now we're getting a bit bigger, and also more residential. This one sits on the edge of a residential neighborhood, on 200 E, in Millcreek. This is a 36k sq ft, 1.9 MW facility. What's in there? I don't know; as mentioned above, datacenters don't tend to tell you who their tenants are. There's probably some reasonable computing power in there, but it's probably not dense enough to be very GPU-heavy.

The sounds of the HVAC systems were quite noticeable at this one. Any time you are dealing with electricity, you are also dealing with heat. In a datacenter, the power drawn by the compute and network equipment gets turned into heat, and you need to get rid of it. Of course, you want to spend as little electricity getting rid of heat as you can. Datacenters measure this with "Power Usage Effectiveness", commonly called PUE: the ratio of total facility power to the power that actually reaches the computers. A PUE of 1.5 means that for every kW that goes to computers, 0.5 kW goes to other stuff - mostly cooling, but also electrical losses, lighting, etc. A 1.5 PUE is pretty good; supposedly some of the biggest datacenters have a PUE of around 1.1.
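
If you want to see the PUE math spelled out, here's a tiny sketch with made-up numbers:

```python
# A minimal PUE sketch with made-up numbers:
# PUE = total facility power / power delivered to the IT equipment.

it_load_kw = 1000       # hypothetical power going to servers and network gear
pue = 1.5

total_facility_kw = it_load_kw * pue
overhead_kw = total_facility_kw - it_load_kw   # cooling, electrical losses, lighting, ...

print(f"IT load:    {it_load_kw} kW")
print(f"Total draw: {total_facility_kw:.0f} kW")
print(f"Overhead:   {overhead_kw:.0f} kW")
# At a PUE of 1.1, the same 1000 kW of IT load would only need ~100 kW of overhead.
```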

This actually highlights one way in which having a fairly large-scale datacenter is efficient: putting all the computers in one place does enable you to use cooling systems that get rid of more heat for less power. Of course, how many computers you have, where your power is coming from, what mechanisms you use to cool them, etc. matters too! Again, we'll get back to that later.

By the way, my guess would be that only the building in the front is a datacenter - the building in the back has too many truck bays and not enough cooling. It's probably a small warehouse of some sort.

This is the biggest datacenter I visited - campus, actually. This facility is in West Jordan, near the South Valley Regional Airport. It's big enough that I have to post several pictures to get you a real sense of the size (but it's not the biggest datacenter in Utah.)

What you're looking at here is three buildings that, together, have a power capacity that's reported (depending on the source) to be around 160 MW (put a pin in that number too.)

Two of these buildings are multi-tenant (the ones with the flat white roofs) like the others we've seen.

That third one in the back, with all of the cooling towers on top, has supposedly been built for a single hyperscaler, and is supposedly something like an 80-100MW building. Which hyperscaler? That information is not public. That's a whole lot of cooling on the roof (which is reported to be water-free), so my money would be on this being an AI data center.

In these pictures, you can see more electrical infrastructure. Bringing that much power into one place takes a lot of wires.

The reason I went on this little tour was to put in perspective the proposed Stratos datacenter project in Box Elder County, UT.

Stratos is supposedly designed to eventually reach a size of 9 GW. That is more than double the 4 GW that the entire state of Utah currently uses. The entire campus is supposed to be big enough that, for comparison, it would fill over 10% of the Salt Lake Valley, as shown in this image (which I didn't make).

That last datacenter campus? At ~160 MW, those three buildings put together are designed for a load roughly 1/56th the size of Stratos. That 300 MW natural gas power station we saw in the background? Stratos is supposed to generate its own power on-site, so it will need 30 of those things. (Or maybe more - remember PUE?)
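
If you want to check my arithmetic on those comparisons:

```python
# Scale comparison using the figures quoted above.

stratos_mw      = 9_000   # proposed eventual capacity (9 GW)
utah_load_mw    = 4_000   # rough current statewide usage (4 GW)
campus_mw       = 160     # the three-building West Jordan campus
gadsby_plant_mw = 300     # the natural gas plant in the earlier photo

print(f"Stratos vs. statewide load: {stratos_mw / utah_load_mw:.2f}x")
print(f"Stratos vs. that campus:    {stratos_mw / campus_mw:.0f}x")
print(f"Gadsby-sized plants needed: {stratos_mw / gadsby_plant_mw:.0f}")
# -> 2.25x the state's current usage, ~56 of those campuses, 30 of those plants
```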

In terms of carbon output, this thing is designed to be an absolute monster.

There's not much getting around that. They have handwaved about including solar and/or wind, but without anything concrete, we should assume this is a whole lot of carbon.

How about water?

Well, that's harder to tell, given all the vagaries and "if"s in the public information so far.

Remember, a datacenter has to get rid of a lot of heat. A datacenter that is generating its own energy on-site has to get rid of *far* more heat.

In the desert West, the most *energy* efficient way of getting rid of heat in the hot summer months is evaporative cooling: you get rid of heat by evaporating water. This has, historically, been a major way of cooling both natural gas plants and datacenters, as well as homes, etc.

The reason this works so well in the West is the same reason it's problematic: we have very dry air, so evaporative cooling is very effective, but that dry air goes hand in hand with the fact that we don't have much water to begin with.
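
To get a feel for the quantities involved, here's a very rough sketch that just uses the latent heat of vaporization of water. It ignores blowdown, drift, and all the real engineering details, and the 100 MW heat load is just a number I picked:

```python
# Very rough estimate of evaporative cooling water use, ignoring blowdown,
# drift, and efficiency details; just the latent heat of vaporization.

LATENT_HEAT_MJ_PER_KG = 2.45   # ~2.45 MJ evaporates 1 kg of water at ambient temps
MJ_PER_MWH = 3600              # 1 MWh = 3600 MJ
KG_PER_GALLON = 3.785          # 1 US gallon of water ~ 3.785 kg

heat_mw = 100                  # hypothetical heat load to reject (100 MW)

kg_per_hour = heat_mw * MJ_PER_MWH / LATENT_HEAT_MJ_PER_KG
gallons_per_day = kg_per_hour * 24 / KG_PER_GALLON

print(f"~{gallons_per_day/1e6:.1f} million gallons/day to reject {heat_mw} MW evaporatively")
# -> roughly 0.9 million gallons per day for a 100 MW heat load
```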

There *are* ways to air-cool natural gas turbines, and there *are* ways to cool datacenters that are not evaporative cooling. They are more *water* efficient. But they are less *power* efficient, which means, in this context, burning even more natural gas.

The backers of Stratos claim that they are trying to get some very new, high-tech gas turbines that operate without water cooling, or at least with very little. That does assuage some water concerns. But their language is very hedge-y - they're trying, they hope to jump in line for the limited supply of them, etc.

They also claim they will use "closed loop" water systems for cooling the datacenter. There are several things this *could* mean, and we need to know more in order to actually understand it. Most cooling systems for datacenters and even large buildings have a closed loop of water (or another coolant) for moving heat around. That's because we cannot *make* cold, we can only *move* heat. In some datacenters, this cold loop comes into the room, where it's used to cool air, which is blown across the servers. In higher-power-density datacenters, the coolant loop comes all the way to the individual rack in order to cool the air right before it enters the servers. In the most high-tech datacenters (which Stratos would likely be), it comes all the way *inside* the server, directly exchanging heat with the hot bits like CPUs and GPUs.

Coolant in these kinds of systems circulates in a closed loop, so you can generally consider the loop to consume very little to no water after it's been filled.

But: you still have to make the heat go away somehow. This is where Stratos *might* use evaporative cooling. Or they might opt for one of the more expensive, less energy efficient dry systems. Saying "we have a closed loop" only tells us *part* of the story!
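
Here's a toy model of that split, just to illustrate why the heat-rejection choice matters more than the closed loop itself. The power and water coefficients are numbers I made up for illustration, not anything Stratos has published:

```python
# Toy model: the closed loop moves heat to a heat exchanger; what happens next
# is what determines water vs. power use. Coefficients are illustrative only.

def heat_rejection(heat_mw: float, method: str):
    """Return (extra_power_mw, water_gal_per_day) for rejecting heat_mw of heat."""
    if method == "evaporative":
        # cheap on power, expensive on water (see the latent-heat sketch above)
        return 0.02 * heat_mw, heat_mw * 9_300
    elif method == "dry":
        # fans and/or chillers: no water consumed, but a bigger chunk of extra power
        return 0.15 * heat_mw, 0.0
    raise ValueError(method)

for method in ("evaporative", "dry"):
    extra_mw, gal = heat_rejection(100, method)
    print(f"{method:>12}: +{extra_mw:.0f} MW of power, {gal:,.0f} gallons/day")
```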

Here's what we know: the Stratos people have secured 13,000 acre-feet of water rights. In numbers that mean more to most of us, that's about 4 billion gallons per year.
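
If you want to check that conversion yourself (one acre-foot is about 325,851 gallons):

```python
GALLONS_PER_ACRE_FOOT = 325_851   # one acre covered one foot deep

water_rights_acre_feet = 13_000
billions_of_gallons = water_rights_acre_feet * GALLONS_PER_ACRE_FOOT / 1e9
print(f"~{billions_of_gallons:.1f} billion gallons per year")   # ~4.2
```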

They *claim* that's far more water than they need, and they won't use most of it.

But: if they don't manage to get their air-cooled gas turbines (which, in addition to being less efficient, also cost more), or decide to go with some evaporative cooling for the datacenter (because it's cheaper and uses less power), they could very easily use that much water. We are very much in a "trust me" situation, and it's not clear that we *should* trust what developers say when they are trying to get permits. We need to get independent studies and binding contracts.

For those who aren't locals, you might not be aware, but: the Great Salt Lake is shrinking. People are trying (not hard enough, probably) to save it. Not just because hey, what would we call our city without it, but also because the lakebed is full of chemicals we'd rather not be breathing in, thanks.

Stratos would not literally pull water out of the lake (which it is quite close to). But: the water rights they have obtained are in the watershed of the lake. So: if they use the water rights they have obtained, they might well contribute to the drying up of the lake.

The point here is: they are hoarding water rights that they claim they will not use. The more reasonable bet is to assume they will use them, and we need a study by actual hydrologists to understand whether using that water would accelerate the lake's demise.

And, you will notice that I have not even touched on a ton of *other* issues, such as:

1) Is there actually demand for all of these computers?
2) Would it be a good idea to fill this demand even if it does exist?
3) Can we build enough computers to fill this thing in a reasonable time anyway?
4) How far will this project get before the AI bubble pops, and will it leave anyone other than the investors holding the bag?
5) If it does get fully built, what other resources (like more water rights) might they go after?
6) Is it a wise idea to provide huge tax breaks to companies that expect to be highly profitable?
7) This is being done through the Military Installation Development Authority - what's the actual military connection here?
8) Regardless of whether it's wet or dry, is dumping this much heat into one valley a good idea?
9) There's no way that burning that much natural gas doesn't raise gas and electricity prices.
10) Can we trust the developers' numbers for how many jobs this will create locally?

Just to name a few.

Here's what I hope your takeaway from this thread will be: datacenters come in many sizes, have many uses, and are not necessarily where you'd expect. The impact they have locally depends on how they're powered, how they're cooled, what they're used for, who owns them, and how big they are. It's worth looking at all of these things when considering whether a datacenter project is a good idea or not.
Closed-loop cooling systems save water, but can be a drain on electricity - KSLTV.com

While closed-loop cooling systems, like the one being touted for a large data center in Box Elder County can save lots of water, they often use more electricity in return, which can impact the environment in other ways, according to Dr. Ricci, a professor in the University of Utah's school of computing.

@ricci
Interesting! So do you use distilled water for the closed loop?

@katrinakatrinka

I don't know the exact level of purity they go for, but yeah, removing things that could leave mineral deposits or cause corrosion is important

It is often mixed with glycol to lower the freezing point (no idea what Stratos would do, they have given us nowhere near that level of detail)

@ricci
I use a CPAP and was thinking of the kind of water I need in that. Adding something to lower the freeze point is also interesting.
@ricci I am reminded of a story that I think Kurt Vonnegut told about his brother. Someone commented that the brother’s desk was a mess. He gestured to his head and said ‘If you think this desk is a mess, you should see what it’s like in here’.
@richardinsandy my collection of spherical objects is clearly visible, dunno what that says about what's in my head
@ricci I still have facebook to stay in touch with family in the UK. Second item in my feed this morning was KSL5’s interview with you, Rob.
@richardinsandy so sorry for inserting myself into their lives in this fashion
@ricci …something about a clean desk being the sign of an empty mind…

@wiersdorf This is a good question, so I looked up some numbers.

Two different sources get me something like 3 acre-feet of water per acre for alfalfa in that area: a 1994 report from Utah State: https://waterrights.utah.gov/docSys/v912/a912/a912044e.pdf (see SNWV in Figure 2), and a listing of a huge farm for sale in the Snowville area now: https://www.land.com/property/6034-acres-in-box-elder-county-utah/4545825/ - claims 3895 acres under irrigation using 11.7k acre-feet of water rights.

So that would mean that the Stratos project has secured enough water rights to farm about 4.3k acres of alfalfa (which is about 10% of the land they say they have access to).
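
The arithmetic behind those numbers, for anyone who wants to check it:

```python
# Water duty for alfalfa implied by that farm listing, and what Stratos's
# rights would cover at the same rate.

listing_af, listing_acres = 11_700, 3_895   # acre-feet of rights, irrigated acres
stratos_af = 13_000

af_per_acre = listing_af / listing_acres
print(f"~{af_per_acre:.1f} acre-feet per acre")              # ~3.0
print(f"~{stratos_af / af_per_acre:,.0f} acres of alfalfa")  # ~4.3k acres
```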

So: this is not nearly enough water to farm all of that area with alfalfa (is all of it even suitable for this purpose? no idea), but enough for a big chunk of it.

Of course, Utah is already watering vastly more alfalfa than we can afford to with our limited water resources.

@ricci wait, most if not all DCs use open loop cooling systems?
@mdione Most (probably all) DCs will use *a* closed loop where they circulate coolant (probably water, maybe mixed with glycol) to get heat out of the room (or directly off the chips). From there, many use systems that consume water to get that heat out into the environment. It's relatively new that large datacenters are trying to use entirely waterless systems on that side
@ricci Thanks for the detailed description. I saw something about the residents trying to fight this but didn’t know the crazy scale.
@EricFielding Also, a thing I didn't mention is that this is not the only datacenter of this size being proposed in the state. There's another one just as big being discussed for central Utah.
@ricci This is REALLY thoughtful and informative; thank you. (And it's worth saving/sharing even outside Mastodon, so: hey, @mastoreaderio ! Unroll!)

@msbellows here's the unrolled thread: https://mastoreader.io?url=https%3A%2F%2Fc.im%2F%40msbellows%2F116557139885627239

Next time, kindly set the visibility to 'Mentioned people only' and mention only me (@mastoreaderio). This ensures we avoid spamming others' timelines and threads unless you intend for others to see the unrolled thread link as well.

Thank you!

@msbellows @mastoreaderio Well, this is handy
@jherazob @mastoreaderio Ain't it? I just wish it wouldn't scold me every time I trigger it publicly. I WANT both the request and the resulting link to be public so other people can learn about/benefit from them!
@ricci naive question, wouldn't building large amounts of solar panels be more energy efficient?

@gerbrandvd I don't know the exact math on this, unfortunately. What I do know is that you'd need both solar *and* storage: in a setting like this, where they're generating all of their own power on-site, they'd need to generate far more power than they use during the brightest hours of the day, then draw on storage overnight and/or when it's overcast.

Then there's also the fact that one generates far less power from solar during the winter when the days are shorter and the angle of the sun in the sky is less favorable (and that you'd have to clear the panels of snow).

Solar might be a reasonably good match for the cooling part of the load - you can sometimes get away with using outside air when it's cool enough (winter, and sometimes at night) - but it would be a lot harder to make it work for the actual computing load, since that's going to run 24/7/365 (especially if this is used for AI training).
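
Just to give a sense of the scale mismatch, here's a crude sizing sketch. The capacity factor and storage efficiency are assumptions I'm making for illustration, not project numbers:

```python
# Crude sizing of an off-grid solar + storage system for a constant load.
# Capacity factor and storage efficiency below are illustrative assumptions.

load_gw = 9.0               # constant load to serve, 24/7
capacity_factor = 0.25      # rough annual average for fixed-tilt solar in Utah (assumed)
storage_round_trip = 0.85   # assumed battery round-trip efficiency

daily_gwh = load_gw * 24
# Pad for storage losses (crudely applied to the whole load)
nameplate_gw = daily_gwh / (capacity_factor * 24 * storage_round_trip)

print(f"~{nameplate_gw:.0f} GW of panels to serve a {load_gw:.0f} GW constant load")
# -> on the order of 40+ GW of nameplate solar, before even sizing the batteries
```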

@ricci Thank you for the overview.
What I don't understand is, why build data centers in areas with warmer climates, when colder ones would be... well, easier to cool?

Aren't economic and ecologic incentives aligned here?
Data centers for compute in particular (as opposed to ones optimized for response time) don't need to be in any particular geographical area anyway, do they?

@phairupegiont You are correct! It's generally easier to cool things down when it's colder outside! Here in northern Utah some datacenters - with far less power density than this one - are able to just use outside air for cooling for a good chunk of the year. With the kinds of heat loads generated by warehouses of GPUs, I suspect their cooling needs are indeed lower in the winter, but they probably need active cooling all the time anyway.

There are some datacenters in Finland that even use cold seawater as part of their cooling systems!

So, why build in places that get hot part of the year? Well, if you are willing and able to use water for evaporative cooling, that's pretty effective in very dry environments - and can be cheap depending on the cost of water. Sometimes, the availability of power is a big thing too - in this case, there is an existing natural gas pipeline running through the valley that they intend to tap. For some kinds of datacenters, it's important to be near your users - though that's less important for AI training, which is what this one would likely be used for.

@ricci great thread, thanks!
@ricci great thread, Rob. I worked for one of the big cloud providers and got to spend quite a bit of time with the datacenter team before I retired. I was blown away by the innovations being implemented to reduce power and cooling requirements. All of that is moot now. We were talking about the potential of 25kW racks. Now they’ve completely blown past that with 100kW racks. It’s insane.
@ricci great thread, it really puts things in perspective!

@ricci Jesus H Christ that’s massive. I’ve always objected to the « water use » reservations regarding data centres, but purely from a UK perspective where:

a) onsite generation is generally a no-no
b) evaporative cooling isn’t required, and generally a no-no

This hits home the difference in planning and land use regulations between the UK and US. This would never be considered here.

@af

Yes, this is absolutely massive.

As far as on-site generation goes, this is *sort* of the kind of area where one might build a power plant in the first place; it's pretty remote. So I think the issues have more to do with carbon emissions, the heat load in a high-desert valley, and the scale than with the fact that it's on-site per se.

Evaporative cooling is *much* more effective here, where our humidity is basically a rounding error away from zero. But yeah, we are very much out of water, and we need to not take the developers' word for it that they don't intend to use it.

@ricci Thanks so much for putting this together.

@ricci Christ. I lived in SLC in the early 70s. Droughts etc etc. And that was with less population and...

I hate everything right now,

@ricci

Why are we building housing for computers and not for people?

@darwinwoodka just imagine what we could do if we put these kinds of resources to other uses

@darwinwoodka @ricci

Venture capitalists and pension funds think it’s more profitable to do the former than the latter.

@ricci
We need to ban new evap-cooled DCs.
Air-side economy is more efficient and doesn't use water (except for humidity). Though you wouldn't put such a DC in Utah, but rather in places with consistent wind, and few high-heat events.

There may someday be a point where we can just use our renewable energy abundance to use CO2-refrigerant DX to cool large datacenters: inefficient, but no water use, and works anywhere.

@ricci
Currently this area is remote ranching country, served by one two-lane road and no businesses like gas stations or stores. In addition to building the data center, they will have to build ALL the infrastructure needed to support it. Where are the workers going to live?

@Dougfir In county council meetings, they've claimed they are going to build some hotels for contractors and restaurants, etc. Probably on the parcel of land they got right off I-84. But they seem to expect that on-site staff (which I think they are likely overestimating to make it look more attractive) will live in Brigham City, Snowville, etc.

The area already has a similar problem with the rocket plant at Promontory point. Both my brothers did internships there, and they had to get up super early to take a company bus out there from Brigham City.

@ricci
A lot of the goldmines in Nevada are remote so there are busses running crews back and forth from towns all the time.
I still don't believe their handwaving about being able to source that much power generation capacity that quickly.
@Dougfir yep, it seems extremely unlikely, and I'm not inclined to take the word of another guy who plays a businessman on TV

@Dougfir @ricci

Once it’s built there won’t be many workers on site.

@jonhendry @Dougfir The developer claims 2,000 workers on site after construction, a number that seems overly optimistic
@ricci besides using electricity and water, data centers contribute to heating up the local environment. Curious to know how much effect the large ones will have...
https://arxiv.org/abs/2603.20897
The data heat island effect: quantifying the impact of AI data centers in a warming world

The strong and continuous increase of AI-based services leads to the steady proliferation of AI data centres worldwide with the unavoidable escalation of their power consumption. It is unknown how this energy demand for computational purposes will impact the surrounding environment. Here, we focus our attention on the heat dissipation of AI hyperscalers. Taking advantage of land surface temperature measurements acquired by remote sensing platforms over the last decades, we are able to obtain a robust assessment of the temperature increase recorded in the areas surrounding AI data centres globally. We estimate that the land surface temperature increases by 2°C on average after the start of operations of an AI data centre, inducing local microclimate zones, which we call the data heat island effect. We assess the impact on the communities, quantifying that more than 340 million people could be affected by this temperature increase. Our results show that the data heat island effect could have a remarkable influence on communities and regional welfare in the future, hence becoming part of the conversation around environmentally sustainable AI worldwide.


@lpryszcz Yep, even if you are energy-efficient at shedding heat, you are still shedding heat!

https://www.sltrib.com/news/environment/2026/05/07/utahs-data-center-could-create/

I think one of the things going on here is the assumption that 10x as big is "only" 10x as bad, but scales that large certainly have the possibility of qualitative changes that we might not have a good understanding of (and which we should not just take the developers' word on)

‘So much worse than I even thought’: Utah’s ‘hyperscale’ data center could create massive heat island near Great Salt Lake

Skeptics of the proposed hyperscale data center in Box Elder County are sweating about a lot more than its energy demands and potential toll on water supplies.


@ricci that is absurd.

I've used a couple and toured a couple more of the US's largest supercomputer facilities, each of which manages to live in a single normal-sized building. These things run simulations of the universe. My stuff could take hours, maybe a day to run, but I know other stuff running there took weeks or months, on thousands of nodes. The facility I've worked with the most, NERSC, serves about 11k users for scientific research.

I struggle to imagine what you could possibly do with the scale of compute proposed at Stratos, even if it served the entire population of the US.

@iris a whole lot of surveillance capitalism, I guess
@ricci well all those numbers seem fucking crazy nuts.
@ricci Earnest question, Rob: if this were built over alfalfa fields, which would most likely use more water?
@ricci
Considering that gas turbines are around 40% efficient, that means actually 22.5GW of heat will be dumped into the environment.
A drying Great Salt Lake is spewing toxic dust. It could cost Utah billions.

A new report from two environmental groups says elected officials and scientists aren't taking the problem seriously enough.

Grist
@ricci Utah is already struggling for water, thanks for this info!!

@ricci interesting to me that the (presumably) higher-density facility is taller (multi-story). I've noticed that with other new-built high-density facilities.

to save ground space? maybe there is water cooling involved and it is helpful to have that equipment above/below servers? or high ceilings help with thermal engineering?

@bnewbold yeah I dunno! In this particular campus, that building seems to have consumed all remaining space on the lot, so it *could* just be an issue of the older ones not being as space constrained, but it also could be a fundamentally different design. My assumption (based only on trends, not any special knowledge) is that this new one also takes the cooling loop all the way to the chip - I don't know what that does with optimal layouts

@bnewbold @ricci

I would assume it provides more room for plumbing, power runs, data cabling, and air handling. And maybe catwalks with easy access to them without needing a ladder or lift.

Silicon chip plants are also taller than you might expect.

@ricci Weeeiiird! I had no idea this went up in my old neighborhood(-ish). I was a little confused when you said by the airport since there's ALSO a tiny Flexential facility on Campus View Drive (directly adjacent).

That SLC-01 building (along 9000 S) has been there for years; I puzzled over its ownership while driving by many times. The monstrosities...yeah, wow. I had no idea, thanks.

@jima they're also putting up an SLC 4 for a hyperscaler nearby, and there is supposed to be another campus going in on 7800 S just east of the river on a former steel mill site - it's not like we're short on datacenters here

Oh and the really big ones are down in Eagle Mountain