Have you ever wondered why your computer has a fan, and why that fan usually stays quiet until the machine actually starts doing something?

When your laptop sits idle, very little is happening electrically. Modern processors are extremely aggressive about not working unless they have to. Large parts of the chip are clock-gated or power-gated entirely. No clock edges means no switching. No switching means almost no dynamic power use. At idle, a modern CPU is mostly just maintaining state, sipping energy to keep memory alive and respond to interrupts.

The moment real work starts, that changes.

Every clock tick forces millions or billions of transistors to switch, charge and discharge tiny capacitors, and move electrons through resistive paths. That switching energy turns directly into heat. More clock cycles per second means more switching. More switching means more heat. Clock equals work, and work equals heat.
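That relationship is the classic dynamic power formula, P ≈ α · C · V² · f. A minimal sketch, with purely illustrative numbers for a hypothetical chip (none of these values describe a real part):

```python
# Dynamic (switching) power: P = alpha * C * V^2 * f
#   alpha : activity factor (fraction of the capacitance switching per cycle)
#   C     : effective switched capacitance of the whole chip
#   V     : supply voltage
#   f     : clock frequency
def dynamic_power(alpha, c_farads, v_volts, f_hz):
    return alpha * c_farads * v_volts ** 2 * f_hz

# Made-up numbers: an idle chip at low activity, voltage, and clock
# versus the same chip under full load.
idle = dynamic_power(alpha=0.02, c_farads=50e-9, v_volts=0.7, f_hz=800e6)
load = dynamic_power(alpha=0.25, c_farads=50e-9, v_volts=1.1, f_hz=3.5e9)
print(f"idle ~{idle:.1f} W, load ~{load:.1f} W")
```

Voltage enters squared, and higher clocks usually require higher voltage, which is why power climbs so steeply under load.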

This is why performance and temperature rise together. When you compile code, render video, or train a model, the clock ramps up, voltage often increases, and the chip suddenly dissipates tens or hundreds of watts instead of one or two watts.

The fan turns on not because the computer is panicking, but because physics is being obeyed.

Even when transistors are not switching, hot silicon still consumes power. As temperature increases, leakage currents increase exponentially. Electrons start slipping through transistors that are supposed to be off. This leakage does no useful work. It simply generates more heat, which increases temperature further, which increases leakage again. This feedback loop is one of the reasons temperature limits exist at all, and ultimately why we have fans – to keep the system under load below this critical temperature.
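The feedback loop can be sketched as a toy fixed-point model. All constants here are invented for illustration (leakage doubling every 10 °C, a simple thermal-resistance model), not measured values:

```python
# Toy model of the leakage feedback loop. Assumptions (invented):
# leakage doubles every 10 degC, and die temperature settles at
# ambient plus (thermal resistance in degC/W) * (total power in W).
def settle(dynamic_w, c_per_w, ambient_c=25.0, runaway_c=150.0):
    temp = ambient_c
    for _ in range(500):
        leak_w = 2.0 * 2 ** ((temp - 25.0) / 10.0)     # leakage power in W
        new_temp = ambient_c + c_per_w * (dynamic_w + leak_w)
        if new_temp > runaway_c:
            return None          # thermal runaway: no stable temperature
        if abs(new_temp - temp) < 0.01:
            return new_temp      # converged to a stable operating point
        temp = new_temp
    return temp

stable = settle(dynamic_w=60.0, c_per_w=0.3)   # decent cooling
runaway = settle(dynamic_w=60.0, c_per_w=0.6)  # poor cooling
print(stable, runaway)
```

With good cooling the loop settles; past a critical point, every extra degree adds more leakage power than the cooling can remove, and there is no stable operating temperature at all.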

Above roughly 100 °C, this leakage becomes a serious design concern for modern chips. Not because silicon melts (that happens above 1400 °C) or stops working, but because efficiency collapses.

You spend more and more energy just keeping the circuit alive, not computing. To compensate, designers must lower clock speeds, increase timing margins, or raise voltage, all of which reduce performance per watt.

Reliability also suffers. High temperature accelerates wear mechanisms inside the chip. Metal atoms in interconnects slowly migrate. Insulating layers degrade. Transistors age faster. A chip running hot all the time will not live as long as one kept cooler, even if it technically functions.

This is why cooling exists, and why it scales with workload: It exists to keep the chip in a temperature range where switching dominates over leakage, where clocks can run fast without excessive voltage, and where the hardware will still be alive years from now.

In space, where you cannot rely on air or liquid to carry heat away, this tradeoff becomes unavoidable and very visible.

  • Run hotter, and you can radiate heat more easily.
  • Run hotter, and your electronics become slower, leakier, and shorter-lived.

On Earth, life and electronics get to pretend the universe is gentle: we sit under a magnetic cocoon.

The Earth's magnetic field bends and corrals a lot of the charged particles that would otherwise slam into the atmosphere and the ground. The polar lights are those particles hitting the upper atmosphere and dumping their energy there instead of into your laptop or your DNA.

Low Earth orbit is still inside much of that protective bottle. It is not deep space: most of the time you are still inside the magnetosphere, but you pass through regions where trapped particles dip closer to Earth. The South Atlantic Anomaly is the famous example, a patch where satellites see a much higher rate of hits. Operators notice because sensors glitch, memory errors spike, and instruments get noisy.

Go higher and the protection changes. The Van Allen belts are zones of trapped particles shaped by the magnetic field. They sit above typical LEO altitudes and below geostationary orbit.

Geostationary orbit is far outside LEO, and you spend much more time in harsher particle populations and different dose conditions.

Radiation matters because a modern chip is a giant field of tiny, delicate transistors storing tiny amounts of charge. A single energetic particle can change how or even if that works.

  • Bit flips: A particle passes through silicon, leaves a trail of charge, and a memory cell or latch interprets that as a 0 becoming a 1. That is a single event upset. It does not break the chip, it just corrupts state. The usual defense is error detection and correction, ECC in memory, parity, scrubbing, retries, and lots of "trust but verify" in data paths. That defense costs area, power, and latency. You carry more bits than you asked for, you spend cycles checking them, and you sometimes redo work.
  • Latchup and destructive events: Some particle strikes can trigger parasitic structures in CMOS so a section of the chip effectively shorts power to ground. If you are lucky, the system detects it and power cycles that block. If you are unlucky, you get local overheating and permanent damage. The defense here is design techniques, guard rings, current limiting, fast power cutoffs, and redundancy. Redundancy costs density. You either duplicate blocks so you can route around a failed one, or you accept that some percentage of silicon will be lost over mission life and you overprovision from day one.
  • Total ionizing dose: Ionizing radiation gradually traps charge in insulating layers and at interfaces. Threshold voltages shift, leakage rises, timing changes, noise margins shrink, and eventually the chip that used to pass validation at room temperature starts failing at its corners. This is why space hardware often talks about dose ratings and mission lifetimes, not just "it works today." The defenses are process choices, device layout choices, and again, guardbands.
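The "carry more bits than you asked for" idea from the first bullet can be shown with the smallest classic error-correcting code, Hamming(7,4). This is a generic textbook sketch, not any specific flight hardware's scheme:

```python
# Hamming(7,4): 4 data bits get 3 check bits, so any single flipped
# bit in the 7-bit codeword can be located and corrected.
def encode(d):                        # d = [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                 # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                 # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                 # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword, positions 1..7

def correct(c):                       # c: 7-bit codeword, maybe one bit hit
    syndrome = 0
    for pos, bit in enumerate(c, start=1):
        if bit:
            syndrome ^= pos           # XOR of positions of all set bits
    if syndrome:                      # nonzero syndrome = position of flip
        c = c.copy()
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]   # recover d1..d4

word = encode([1, 0, 1, 1])
hit = word.copy()
hit[4] ^= 1                           # simulate a particle strike on one bit
print(correct(hit))                   # -> [1, 0, 1, 1]
```

Real memory uses wider SECDED codes per word, but the principle is the same: extra check bits buy the ability to locate and undo a single flip, at a cost in area and latency.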

How does space hardware survive?

Shielding and Redundant Design.

  • Put material around the electronics, often aluminum as a structural and shielding compromise. It helps, but mass is the tax you pay forever. It also only helps a bit: High energy particles penetrate a lot of material, and some shielding configurations create secondary particles when the primary hits the shield. You can reduce dose and upset rates, you cannot build a perfect bunker without turning your satellite into a brick.
  • A lot of rad hard parts use larger transistors, older process nodes, thicker oxides, and conservative voltages. Larger devices store more charge, so a stray deposit is less likely to flip a bit. Thicker insulators tolerate more ionization. Conservative voltages and clocks give you more timing margin as the device ages. All of this makes the chip slower and bigger for the same function.
  • On top of that: logic-level hardening and software paranoia. ECC everywhere. Voters and triplication for critical state: triple modular redundancy, where you do the same computation three times and take the majority result. Watchdogs, reset domains, isolation boundaries, and constant self-checking. You get correctness, but you spend transistors and joules on distrust.
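The triple modular redundancy mentioned above can be sketched directly: run the computation three times and majority-vote, so a single upset copy is outvoted. A toy simulation; the upset model and rates are invented for illustration:

```python
from collections import Counter
import random

# TMR: run the same computation three times and take the majority.
# One corrupted copy is masked; two copies corrupted differently
# leave the voter with no majority (returned as None here).
def tmr(compute):
    results = [compute() for _ in range(3)]
    value, count = Counter(results).most_common(1)[0]
    return value if count >= 2 else None

# Simulated computation where a single-event upset sometimes flips a bit.
def flaky_add(x, y, upset_rate=0.1):
    result = x + y
    if random.random() < upset_rate:
        result ^= 1 << random.randrange(8)   # one random bit flip
    return result

random.seed(1)
votes = [tmr(lambda: flaky_add(20, 22)) for _ in range(1000)]
print("correct:", votes.count(42), "of 1000")
```

Note the cost: three full computations plus a voter, for one result. In real designs the voter itself is a single point of failure and gets special hardening of its own.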

Every one of these defenses hits the same three budgets.

Area goes up because you replicate circuits and add check bits and voters.

Power goes up because more circuitry toggles, and because robust designs often run at higher voltages than bleeding edge consumer silicon.

Speed goes down because checking takes time, retries take time, and wide safety margins force lower clocks.

Space hardened electronics is built for survival, not speed. It is very reliable, and slow as fuck.

If we hold all that against an H100 GPU of the kind used for AI, we can see that this is a lost cause without launching a single satellite.

The H100's GH100 die is about 814 mm² on TSMC 4N, with about 80 billion transistors.

A die like this does not fly into space and survive longer than an afternoon.

https://indico.esa.int/event/165/contributions/1218/attachments/1205/1425/05b_-_KIPSAT_-_Presentation.pdf

For its space-hardened compute, ESA talks about 65 nm processes: structures about 16 times larger linearly, and roughly 250 times larger in area.

The same number of transistors as an H100 would need a roughly 0.2 m² slab, which also means that energy goes up and clock goes down, down, down.

Some compute in space runs on 28 nm structures, roughly a 50× area increase compared to an H100. That's about 40,000 mm² for the same number of transistors.
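The area scaling here is just the linear feature-size ratio squared. A quick check, using the die figures from the post (814 mm², 4 nm-class node):

```python
# Back-of-envelope die-area scaling: area grows with the square of the
# linear feature-size ratio. Numbers from the post: GH100 is ~814 mm^2
# on a 4 nm-class node with ~80e9 transistors.
H100_AREA_MM2 = 814
H100_NODE_NM = 4

def scaled_area_mm2(target_node_nm):
    ratio = (target_node_nm / H100_NODE_NM) ** 2   # linear ratio, squared
    return H100_AREA_MM2 * ratio

for node in (28, 65):
    a = scaled_area_mm2(node)
    print(f"{node} nm: ~{a / 1e6:.2f} m^2 ({a:,.0f} mm^2)")
```

This reproduces the ~40,000 mm² figure for 28 nm and the ~0.2 m² slab for 65 nm; treat it as an order-of-magnitude estimate, since process node names only loosely track actual feature sizes.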

Or, in other words, a GPU does not work in space.

At all.

If you try, 80 bn transistors become around 25 bn usable transistors after the overhead for triple modular redundancy (n = 3) and redundant reserve.

So even if you sent a 4 nm node chip into space, it's no longer an 80 bn monster, but only a relatively modest 25 bn transistor GPU fragment.

Running hotter radiates better: radiated power scales with the fourth power of absolute temperature. So even relatively modest temperature increases pay off big time in terms of radiative cooling (100 °C → 120 °C), but hotter also means more leakage, more power, less clock. So your 25 bn transistors of net capacity will compute more slowly than on Earth.
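The fourth-power payoff can be checked with the Stefan-Boltzmann law. A quick sketch; the emissivity of 0.9 is my assumption, not from the post:

```python
# Radiated power per unit area: P/A = emissivity * sigma * T^4,
# with T in kelvin (Stefan-Boltzmann law).
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiated_w_per_m2(temp_c, emissivity=0.9):
    t_k = temp_c + 273.15
    return emissivity * SIGMA * t_k ** 4

cool = radiated_w_per_m2(100)   # radiator at 100 degC
hot = radiated_w_per_m2(120)    # radiator at 120 degC
print(f"{cool:.0f} -> {hot:.0f} W/m^2, gain {hot / cool - 1:.0%}")
```

The emissivity cancels out of the ratio, so the roughly 23% gain from those 20 degrees holds regardless of the surface coating.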

A GPU does not work in space.

We can now talk about bandwidth, latency and spectrum carrying capacity, because we also want to TALK to those data centers from earth, but that's kind of a moot point already.

We can then talk about launch costs, kilograms, lifetime, and the atmospheric effects of that material on re-entry, but it will never come to that.

@isotopp I don't understand why the peddler of such absurdity get any air time on any platform (except maybe a comedy club; or maybe as a mental health issue).

Any remotely professional journalistic fact check would weed these stories out as completely bonkers.

@larsmb @isotopp That's why they are buying these platforms (and turning them into comedy clubs).
@isotopp it is just a story to allow moving X to SpaceX, which is too important to fail.
Until it isn’t.
@isotopp This gets even worse with GPU memory. H100s have internal remapping tables for defective memory cells, and these keel over even with an ionosphere between the local fusion reactor and the cards.
@isotopp and you've proven again that disproving a ruse such as "we put AI into space" takes longer than a Tech Bro to come up with that shit.

@anton @isotopp

While that is true, @isotopp's postings have also been way more interesting reading than anything I've read about the original proposal to put datacenters in space.

@isotopp The really interesting question is: How do you prevent an AI in space from killing the on-site space datacenter maintenance people because of contradictory commands? Seriously, I think the radiation problem may be solvable by shielding. The thermals will be the main problem. A single GPU, maybe. At scale the components may be dying of heat as fast as of radiation.
@isotopp If you only do it because of solar energy, it's probably easier and cheaper to just install more solar cells and wind turbines around a datacenter to tap into the space energy ;) . You may not even need that much battery storage: you switch the workload to a datacenter with enough of such energy, and thus the cheapest energy. Workloads are much more mobile than the usual consumer. Essentially the workload would follow the sun. Without being in space.
@isotopp Much less radiation, less cooling problems, no problems with murderous AIs in space.
@isotopp Don't be so negative. Sometime in the near future magic will happen.
We've seen this happening with the nuclear waste problem, fusion reactors and combustion engine efficiency. It only took a few decades to not find a solution.
@isotopp To put it into perspective: to get rid of the 500W thermal design power of an Intel Xeon 6962P just by radiation, the 104.5mm x 70.5mm sized package would need a temperature of 773°C. If you like to operate at 100°C, the power limit is about 8W (given the same package size).

@cgudrian @isotopp

That’s if you put the chip into space, without any thermal management.

You would design an active systems that transports heat away from the chip into some liquid, perhaps boost the temperature with a heatpump and transport it to a large radiator.
If I’m right, about ½ m² or more for 500 W.

@isotopp Handy Fun Fact, it takes about a square kilometer of radiator panels to dissipate a gigawatt of heat, and a gigawatt of space solar panels needs about 4 square kilometers. Even if they loft a bunch of relatively small satellites that's a *lot* of mass and complexity for what's frankly a techbro fever dream.
@isotopp "In space, where you cannot rely on air or liquid to carry heat away, this tradeoff becomes unavoidable and very visible." Yes! The most often used reason I hear for putting a DC in orbit is: it's damn cold... No it's not when the sun is shining hard on you, and even if not: the reason thermos flasks are good at holding things hot is the vacuum between the inner and outer wall. And vacuum is something you will find plenty of out there... 😉
@isotopp Heat dissipation issues alone would fill its very own thread 😏

The good @nyrath has a very nice webpage on that topic: https://projectrho.com/public_html/rocket/heatrad.php


@wonka @nyrath @thilo @isotopp Considering how much radiator panel even our primitive International Space Station uses, it occurs to me that virtually all classic depictions of space stations and other large constructs with huge volume to surface ratios are unrealistic. The Death Star should melt just from having the crewed sections have lighting.
@60sRefugee @wonka @nyrath @thilo @isotopp there was a great article on ecumenopoli I read a while back about how having a city that large turns into a heat dissipation problem.
Coruscant, Heat Dissipation, and Basic Worldbuilding, by M. Q. Allen

@cinebox @60sRefugee @wonka @nyrath @thilo @isotopp

I do love fandom nitpicking.

I'll add a few things in case someone is interested:

I believe the "city the size of a planet" trope was originally introduced by Asimov in Foundation. It was later used by at least Harrison in the "Bill, the Galactic Hero" series. In Star Wars, Coruscant was invented by Timothy Zahn for the Thrawn series.

To a certain extent these should all be seen as the "same place" (capital of a galactic polity). ½

@cinebox @60sRefugee @wonka @nyrath @thilo @isotopp

2/2

I recall Harrison addressing the population by pointing out that there are completely abandoned ruins in his capital (something akin to Detroit). So while the capital had a high population, it wasn't evenly distributed. There were also buried seas etc.

In the books, Coruscant had somewhat of a morlock problem, and the sublevels are outside Republic control. The polities didn't even have maps! So the "1..3 trillion" was just the known subjects.

@cinebox @60sRefugee @wonka @nyrath @thilo @isotopp

I'll add a personal assumption to the "each planet has only one type of fauna" -problem that Star Wars has.

I believe that when space travel is as cheap as it seems to be in SW, then there's no need to populate the more harsh climates of the planet (or they were outside the galactic polity control). So for example Tatooine's Mos Eisley is probably closer to one of the poles, and Alderaan's poles were used just for cold storage.

@iju @60sRefugee @wonka @nyrath @thilo @isotopp you know, “not evenly distributed” reminds me that I don’t think Coruscant is ever portrayed outside the sight-lines of the senate building or jedi temple. Maybe it genuinely is mostly ruins

@cinebox @60sRefugee @wonka @nyrath @thilo @isotopp

The last season of Clone Wars (which ends in a very epic manner!) starts with an arch where Ahsoka spends time in the lower levels, finding out that the Jedi are not well-thought of. Clone Wars also has shorter periods where Anakin (or someone else) visits the lower levels for one reason or another.

But in general, Coruscant was created for the books, so its existence outside senate and the temple tends to not be very visually interesting.

@iju @60sRefugee @wonka @nyrath @thilo @isotopp of course theyre the same place, Star Wars is just Foundation fanfic :P

Now that is some first grade rage bait! 😉

@cinebox @iju @60sRefugee @nyrath @thilo @isotopp

@60sRefugee @wonka @nyrath @thilo @isotopp you know the original Death Star managed to get all the heat out through a little hole and you know how that worked out for them in the end -- reminds me of a Dorling Kindersley about 𝑆𝑡𝑎𝑟 𝑊𝑎𝑟𝑠 spacecraft that had detailed cutout drawings and not a single fuel tank!
@UP8 @60sRefugee @wonka @nyrath @thilo @isotopp Yeah, as long as you don't mind dumping physical material (some sort of "exhaust port") you can load that up with heat and dump it. In *principle* you could also dump heat radiatively in large amounts using a laser or similar, but I don't think it's been done in practice. Laser cooling does exist, but on a different basis for very small systems.

@_thegeoff

The exhaust keeps things under control, but every now and then you get some heat build up and have to destroy a nearby planet

@_thegeoff @60sRefugee @wonka @nyrath @thilo @isotopp when i run numbers for space-based cooling systems I am not bothered by the required area of the cooling fins because if your operating temperatures are close to temperatures on Earth the scale of the radiator is about the same as the scale of your solar panels...
@_thegeoff @60sRefugee @wonka @nyrath @thilo @isotopp ... 𝐁𝐔𝐓 with thin film and membrane construction solar panels could be even lighter weight than they are now whereas you need some kind of cooling loop that doesn't get stuck in zero gravity and won't freeze up if something goes wrong; i can see why heat pipes are so popular in that business and they do OK in terms of power density but not as good as state-of-the-art or future solar panels ...
@_thegeoff @60sRefugee @wonka @nyrath @thilo @isotopp ... I think one way or another weight kills any plan for off-planet data centers even if some kind of Starship-class vehicle gets perfected; there is also the "wireless chauvinism" that makes people all too easily not see the huge advantage a data center with fiber running all over the place has over "modern" and "clean" wireless systems that Apple fanbois would approve of.
@60sRefugee @wonka @nyrath @thilo @isotopp @cinebox I’d assume the power source that can explode a planet with a laser has an equally magical heat deletion mechanic.
@robert_cassidy @60sRefugee @wonka @nyrath @thilo @isotopp @cinebox It's the same system. They just pump excess heat into Other People's Planets until the problem goes away.
@robert_cassidy @wonka @nyrath @thilo @isotopp @cinebox See Nyrath's Boom Table for how much energy it takes to explode a planet like Alderaan and then calculate the _mass_ of that much energy. Like the equivalent of cubic kilometers of matter and anti-matter.

@nyrath @robert_cassidy @wonka @thilo @isotopp @cinebox So much energy in fact that if we're not to take Star Wars as pure science fantasy we'd need to postulate that the Death Star's main weapon is actually some sort of conversion beam, rather than supplying all the energy needed.
@60sRefugee @wonka @nyrath @thilo @isotopp IIRC the deathstar was not full of people space, mostly a star destroyer worth of decking wrapped around a giant reactor and beam weapon.