There's a lot of talk going around about datacenters, so yesterday I did a quick tour of some of the datacenters in the Salt Lake Valley. Some are indeed quite large, but there are a bunch of smaller ones too - and they are not always where you think!

All of these are publicly known, and you can find them (and ones in your own area) at https://www.datacentermap.com/ .

Let's start with a datacenter that I go by all the time! It's across the street from my grocery store in downtown #SLC. It's listed as a colocation facility; datacenters are famously secretive about who their tenants are, but we can guess that it probably hosts servers belonging to nearby businesses, especially ones that want their storage and other infrastructure close by but don't want to maintain a secure, cooled room themselves. Given the number of banks headquartered nearby, I'd bet at least some of them are customers.

This is a fairly little guy, with apparently 16k square feet of floorspace and 1.6MW of power.

Next, an even smaller datacenter, that just about anyone in #SLC has seen! This is XMission, a local Internet Service Provider that's been running since 1993, so one of The Ancients in Internet time. It's on a very busy part of 4th South, and if you've been by at night, you've seen the big LED display on the front of the building that they put various animations on.

One of the things that I *think* is probably in this building is SLIX: https://slix.net/traffic/ - this is an Internet Exchange Point (IXP), where various carriers meet up to exchange traffic without it having to travel long distances. These are often run as a sort of community infrastructure - it's in the best interests of all networks involved to connect to each other so that they can do their jobs more efficiently.

SLIX is fairly small (according to their own data they carry ~200Gbps, with some spikes up to 1Tbps). There are about 40 networks that meet there: https://slix.net/participants/ . Funny story, when I first got Google Fiber at my house, I was getting routed through California to get to the University of Utah campus just a few miles away. I pinged a guy I know who pinged a guy he knows who ... learned that some of the participants in SLIX didn't have their routes set up right. A config change later, and not only me, but basically everyone on any commercial ISP in the Salt Lake Valley had much more direct routes to campus!

This one is physically larger (22.5k sq ft) than the first datacenter we looked at, but claims less power: 490kW. That's not a ton of power - my Chevy Bolt can draw 150kW from its batteries at max acceleration, and there are much bigger and sportier EVs that can draw close to this whole datacenter's capacity! (though only for brief periods, of course; this datacenter probably draws a substantial fraction of that 24/7) Why does this datacenter have so much less power?

Well, one of the key factors of datacenters is how power-dense they are: how much power they are designed to deliver to each rack, and how much heat they are capable of moving out.
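To make the contrast between these two facilities concrete, here's the back-of-envelope math using the advertised numbers above (floorspace includes aisles and support space, so these are rough facility-wide averages, not per-rack figures):

```python
# Rough average power density for the two facilities above,
# from their advertised power and floorspace numbers.

facilities = {
    "downtown colo":  (1_600_000, 16_000),  # (watts, square feet)
    "carrier hotel":  (490_000, 22_500),
}

for name, (watts, sqft) in facilities.items():
    print(f"{name}: {watts / sqft:.0f} W/sq ft")
# downtown colo: ~100 W/sq ft; carrier hotel: ~22 W/sq ft
```

So the first facility is designed to deliver roughly four to five times as much power per square foot, which is consistent with it hosting denser compute.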

Compute - especially GPU compute for AI - is incredibly power-dense and incredibly hot. So we can guess this datacenter is probably not for compute. If I had to guess, this is probably mainly intended as a "carrier hotel" - it's probably focused on having telecoms companies as tenants. I base this both on the lower power density, and where it is: it's near the Utah State Fairpark, which is in turn relatively close to train tracks heading both east-west and north-south. A lot of long-distance fiber in the US follows both the rail and Interstate road networks, because it's relatively straightforward to run fiber alongside transportation links. Salt Lake City lies on the west side of some of the few passes through the Rockies, so it has a ton of fiber, following I-80, the Union Pacific, etc. This is a good place for carrier hotels.

How is a carrier hotel different from an IXP? At an IXP, the carrier is just pulling in some fiber and maybe one or two routers. But carriers need a lot more equipment than that: servers of various kinds, plus the bigger backbone routers that fan out in many directions, etc. Mobile carriers have a fair amount of wired topology to deal with too. That's the kind of stuff they put in carrier hotels, and this is a good spot for them.

I picked this shot because, in the background, you can see the Gadsby Power Plant, one of the main sources for power in #SLC. That's a natural gas plant that generates about 300MW. Put a pin in that number, we'll come back to it later.

Now we're getting a bit bigger, and also more residential. This one sits on the edge of a residential neighborhood, on 200 E, in Millcreek. This is a 36k sq ft, 1.9MW facility. What's in there? I don't know; as mentioned above, datacenters don't tend to tell you who their tenants are. There's probably some reasonable computing power in there, but it's probably not dense enough to be very GPU-heavy.

The sounds of the HVAC systems were quite noticeable at this one. Any time you are dealing with electricity, you are also dealing with heat. In a datacenter, the power drawn by the compute and network equipment gets turned into heat, and you need to get rid of it. Of course, you want to spend as little electricity getting rid of heat as you can. Datacenters measure this with "Power Usage Effectiveness", commonly called PUE. A PUE of 1.5 means that for every kW that goes to computers, 0.5 kW goes to other stuff - mostly cooling, but also heat losses, lighting, etc. A 1.5 PUE is pretty good; supposedly some of the biggest datacenters have a PUE of around 1.1.

This actually highlights one way in which having a fairly large-scale datacenter is efficient: putting all the computers in one place does enable you to use cooling systems that get rid of more heat for less power. Of course, how many computers you have, where your power is coming from, what mechanisms you use to cool them, etc. matters too! Again, we'll get back to that later.

By the way, my guess would be that only the building in the front is a datacenter - the building in the back has too many truck bays and not enough cooling. It's probably a small warehouse of some sort.

@ricci How do you find out the size and power usage of a particular data center?

@mjd

Excellent question! For the multi-tenant datacenters, location, size, and power draw are generally advertised, because they are trying to attract customers. I pulled it from datacentermap.com, so, for example:

https://www.datacentermap.com/usa/utah/salt-lake-city/salt-lake-city-campus/

For the private ones, like the likely-AI building at this campus, you mostly have to get this information from press releases. So it's probably less reliable, as there is more incentive to overhype.

@ricci Thanks!

When I saw the claim about the 9GW data center, my immediate thought was that it was simply a lie, intended as an advertisement to potential investors: Look what amazing stuff we are going to do!

How plausible do you find the claim that they actually intend to build a 9GW data center that will take up 10% of the Salt Lake Valley?

@mjd Frankly, I think it's entirely implausible that it will get built as advertised. I'm not sure that the demand is actually there for as many datacenter projects as have been announced. I think it's a very good bet that many or even most of them won't get built out to the size they've discussed. I think the game here is to make big announcements to try to grab headlines and capital before someone else does, and before demand collapses. Is this one of the ones that might actually get built? No idea.

One likely pivot, if the datacenter doesn't get built, or gets built at a much smaller size, is that they switch to being a private power plant with a bunch of land where they don't have to follow state or county land-use regulations (this is what MIDA is for). That would likely mean bringing in other energy-intensive industries; they have more or less said this in county commission meetings. There's a chance that outcome would actually be far worse: datacenters (if they use low-water cooling) use less water and don't produce as much ground pollution as many other industrial land uses.

@ricci @mjd Could this be so they don't have to go back for more permits later? (Someone thinking very long term about how much they might build someday?)

@skybrian @mjd yes, this is a possibility, and it's probably what they'd say if asked.

I'm not sure it's entirely credible, though.