Code is a liability (not an asset). Tech bosses don't understand this. They think AI is great because it produces 10,000 times more code, but that means it's producing 10,000 times more liabilities. AI is the asbestos we're shoveling into the walls of our high-tech society:

https://pluralistic.net/2025/09/27/econopocalypse/#subprime-intelligence

--

If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

https://pluralistic.net/2026/01/06/1000x-liability/#graceful-failure-modes

1/

Code is a liability. Code's *capabilities* are assets. The goal of a tech shop is to have code whose capabilities generate more revenue than the costs associated with keeping that code running. For a long time, firms have nurtured a false belief that code costs less to run over time: after an initial shakedown period in which the bugs in the code are found and addressed, code ceases to need meaningful maintenance.

2/

After all, code is a machine without moving parts - it does not wear out; it doesn't even wear down.

This was the thesis of Paul Mason's 2015 book *Postcapitalism*, a book that has aged remarkably poorly (though not, perhaps, as poorly as Mason's own political credibility). In reality, code is not an infinitely reproducible machine that requires no labor inputs to operate.

3/

Rather, it is a brittle machine that requires increasingly heroic measures to keep it in good working order, and which eventually does "wear out" (in the sense of needing a top-to-bottom refactoring).

To understand why code is a liability, you have to understand the difference between "writing code" and "software engineering."

4/

"Writing code" is an incredibly useful, fun, and engrossing pastime. It involves breaking down complex tasks into discrete steps that are so precisely described that a computer can reliably perform them, and optimizing that performance by finding clever ways to minimize the demands the code puts on the computer's resources, such as RAM and processor cycles.

5/

Meanwhile, "software engineering" is a discipline that subsumes "writing code," but with a focus on the long-term operation of the *system* the code is part of. Software engineering concerns itself with the upstream processes that generate the data the system receives. It concerns itself with the downstream processes to which the system emits its processed information.

6/

It concerns itself with the adjacent systems that receive data from the same upstream processes and/or emit data to the same downstream processes as the system itself.

"Writing code" is about making code that *runs well*. "Software engineering" is about making code that *fails well*.
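A toy sketch of the difference, in Python (hypothetical function names, not anyone's real codebase): both versions "run well" on clean input, but only the second fails legibly when the world sends something unexpected.

```python
# "Runs well": happy-path parsing that silently guesses when input is bad.
def parse_price_fragile(raw: str) -> float:
    try:
        return float(raw)
    except ValueError:
        return 0.0  # the error vanishes into downstream totals

# "Fails well": the same job, but bad input produces a loud, legible
# error naming the offending value, instead of quietly corrupting output.
def parse_price_engineered(raw: str) -> float:
    try:
        return float(raw)
    except ValueError as exc:
        raise ValueError(f"unparseable price field: {raw!r}") from exc

print(parse_price_fragile("12,99"))  # prints 0.0 -- quietly wrong
```

The fragile version is less code and never crashes, which is exactly the problem: nobody downstream ever learns that the input was garbage.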

7/

It's about making code that is legible - that can be understood by third parties asked to maintain it, or who might be asked to adapt the processes downstream, upstream or adjacent to the system to keep it from breaking. It's about making code that can be adapted when, for example, the underlying computer architecture it runs on is retired and has to be replaced, either with a new kind of computer or with an emulated version of the old one:

https://www.theregister.com/2026/01/05/hpux_end_of_life/

8/

The last supported version of HP-UX is no more: Remember when HP made its own CPUs and Unix? We wonder if it does

The Register

Because that's the thing: any nontrivial code has to interact with the outside world, and the outside world isn't static, it's *dynamic*. The outside world busts through the assumptions made by software authors *all the time* and every time it does, the software needs to be fixed. Remember Y2K? That was a day when perfectly functional code, running on perfectly functional hardware, would stop functioning - not because the code changed, but because *time marched on*.

9/

We're 12 years away from the Y2038 problem, when 32-bit flavors of Unix will all cease to work, because they, too, will have run out of computable seconds. These computers haven't changed, their software hasn't changed, but the world - by dint of ticking over, a second at a time, for 68 years - will wear through their seams, and they will rupture:

https://www.theregister.com/2025/08/23/the_unix_epochalypse_might_be/
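The arithmetic behind the epochalypse, sketched in Python: a signed 32-bit time_t counts seconds from the Unix epoch and simply runs out in January 2038.

```python
from datetime import datetime, timezone

# A signed 32-bit time_t tops out at 2**31 - 1 seconds after the
# Unix epoch (1970-01-01 00:00:00 UTC). One tick later it wraps
# negative, to a date in December 1901.
INT32_MAX = 2**31 - 1

last_good = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
print(last_good)   # 2038-01-19 03:14:07+00:00

wrapped = datetime.fromtimestamp(-(2**31), tz=timezone.utc)
print(wrapped)     # 1901-12-13 20:45:52+00:00
```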

10/

The Unix Epochalypse might be sooner than you think: Museum boffins find code that crashes in 2037

The Register

The existence of "the world" is an inescapable factor that wears out software and requires it to be rebuilt, often at enormous expense. The longer code is in operation, the more likely it is that it will encounter "the world." Take the code that devices use to report on their physical location. Originally, this was used for things like billing - determining which carrier or provider's network you were using and whether you were roaming.

11/

Then, our mobile devices used it to determine our location in order to give us turn-by-turn directions. Then, this code was repurposed again to help us find our lost devices. This, in turn, became a way to locate *stolen* devices - a use-case that sharply diverges from finding lost devices in important ways. For example, when locating a lost device, you don't have to contend with the possibility that a malicious actor has disabled the "find my lost device" facility.

12/

These additional use cases - upstream, downstream and adjacent - exposed bugs in the code that never surfaced in the earlier apps. For example, all location services have some kind of default behavior in the (very common) event that they're not really sure where they are. Maybe they have a general fix - for example, they know which cellular mast they're connected to or they know where they were the *last* time they got an accurate location fix - or maybe they're totally lost.

13/

It turns out that in many cases, location apps drew a circle around all the places they *could* be and then set their location to the middle of that circle. That's fine if the circle is only a few feet in diameter, or if the app quickly replaces this approximation with a more precise location. But what if the location is miles and miles across, and the location fix *never* improves?
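A minimal sketch of the bug (hypothetical code, not any vendor's actual implementation): average the candidate coordinates and report the centroid as though it were a real fix.

```python
# Naive fallback: average all candidate positions and report the
# centroid, with no indication of how uncertain the fix is.
# (Averaging raw lat/lon also misbehaves near the poles and the
# antimeridian -- this is strictly an illustration.)
def naive_fix(candidates: list[tuple[float, float]]) -> tuple[float, float]:
    lats = [lat for lat, _ in candidates]
    lons = [lon for _, lon in candidates]
    return (sum(lats) / len(lats), sum(lons) / len(lons))

# Candidates scattered across the continental USA collapse into a
# single confident-looking point near the middle of the country.
spread = [(47.0, -122.3), (25.8, -80.2), (40.7, -74.0), (32.7, -117.2)]
print(naive_fix(spread))  # roughly (36.6, -98.4)
```

The output looks just as precise as a GPS lock on your driveway - nothing in the return value says "this could be anywhere within a thousand miles."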

14/

What if the location for any IP address without a defined location is given as *the center of the continental USA* and any app that doesn't know where it is reports that it is in a house in Kansas, sending dozens of furious (occasionally armed) strangers to that house, insisting that the owners are in possession of their stolen phones and tablets?

https://theweek.com/articles/624040/how-internet-mapping-glitch-turned-kansas-farm-into-digital-hell

You don't just have to fix this bug once - you have to fix it over and over again.

15/

How an internet mapping glitch turned this Kansas farm into digital hell

For a decade, the owners of a Kansas farm have been inundated with accusations that they are online scammers and identity thieves

The Week

In Georgia:

https://www.jezebel.com/why-lost-phones-keep-pointing-at-this-atlanta-couples-h-1793854491

In Texas:

https://abc7chicago.com/post/find-my-iphone-apple-error-strangers-at-texas-familys-home-scott-schuster/13096627/

And in my town of Burbank, where Google's location-sharing service once told us that our then-11-year-old daughter (whose phone we couldn't reach) was 12 miles away, on a freeway ramp in an unincorporated area of LA county. She was actually at a nearby park, but out of range, and the app estimated her location as the center of the region it had last fixed her in. It was a rough couple of hours.

16/

Why lost phones keep pointing at this Atlanta couple's home


Jezebel

The underlying code - the code that uses some once-harmless default to fudge unknown locations - needs to be updated *constantly*, because the upstream, downstream and adjacent processes connected to it are changing *constantly*. The longer that code sits there, the more superannuated its original behaviors become, and the more baroque, crufty and obfuscated the patches layered atop it become.

17/

Code is not an asset - it's a liability. The longer a computer system has been running, the more tech debt it represents. The more important the system is, the harder it is to bring down and completely redo. Instead, new layers of code are slathered atop it, and wherever those layers meet, there are fissures where the systems' behaviors don't quite match up.

18/

Worse still: when two companies are merged, their seamed, fissured IT systems are smashed together, so that now there are *adjacent* sources of tech debt, as well as upstream and downstream cracks:

https://pluralistic.net/2024/06/28/dealer-management-software/#antonin-scalia-stole-your-car

19/

Pluralistic: The reason you can’t buy a car is the same reason that your health insurer let hackers dox you (28 Jun 2024) – Pluralistic: Daily links from Cory Doctorow

That's why giant companies are so susceptible to ransomware attacks - they're full of incompatible systems that have been coaxed into a facsimile of compatibility with various forms of digital silly putty, string and baling wire. They are not watertight and they cannot be made watertight.

20/

Even if they're not taken down by hackers, they sometimes just fall over and can't be stood back up again - like when Southwest Airlines' computers crashed for all of Christmas week 2022, stranding millions of travelers:

https://pluralistic.net/2023/01/16/for-petes-sake/#unfair-and-deceptive

Airlines are especially bad, because they computerized early, and can't ever shut down the old computers to replace them with new ones.

21/

Pluralistic: 1,000,000 stranded Southwest passengers deserved better from Pete Buttigieg (16 Jan 2023) – Pluralistic: Daily links from Cory Doctorow

This is why their apps are such dogshit - and why it's so awful that they've fired their customer service personnel and require fliers to use the apps for *everything*, even though the apps do. not. work. These apps won't ever work.

The reason that British Airways' app displays "An unknown error has occurred" 40-80% of the time isn't (just) that they fired all their IT staff and outsourced to low bidders overseas.

22/

It's that, sure - but also that BA's first computers ran on electromechanical valves, and everything since has to be backwards-compatible with a system that one of Alan Turing's proteges gnawed out of a whole log with his very own front teeth. Code is a liability, not an asset (BA's new app is years behind schedule).

23/

Code is a liability. The servers for the Bloomberg terminals that turned Michael Bloomberg into a billionaire run on RISC chips, meaning that the company is locked into using a dwindling number of specialist hardware and data-center vendors, paying specialized programmers, and building brittle chains of code to connect these RISC systems to their less exotic equivalents in the world. Code isn't an asset.

24/

AI can write code, but AI can't do software engineering. Software engineering is all about thinking through *context* - what will come before this system? What will come after it? What will sit alongside it? How will the world change? Software engineering requires a very wide "context window," something that AI does not, and cannot, have.

25/

AI has a very narrow and shallow context window, and linear expansions of an AI's context window require *geometric* expansions in the amount of computational resources the AI consumes:

https://pluralistic.net/2025/10/29/worker-frightening-machines/#robots-stole-your-jerb-kinda
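A back-of-the-envelope sketch of that superlinear blowup, assuming standard transformer attention (real deployments stack further costs on top): just the pairwise token-comparison step grows with the square of the context length.

```python
# Every token attends to every other token, so the comparison count
# grows quadratically: double the context, quadruple this term alone.
def attention_comparisons(context_tokens: int) -> int:
    return context_tokens ** 2

for n in (1_000, 2_000, 4_000, 8_000):
    print(f"{n:>6} tokens -> {attention_comparisons(n):>12,} comparisons")
```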

Writing code that works, without consideration of how it will fail, is a recipe for catastrophe. It is a way to create tech debt at scale. It is shoveling asbestos into the walls of our technological society.

26/

Pluralistic: When AI prophecy fails (29 Oct 2025) – Pluralistic: Daily links from Cory Doctorow

Bosses *do not know* that code is a liability, not an asset. That's why they won't shut the fuck up about the chatbots that shit out 10,000 times more code than any human programmer. They think they've found a machine that produces *assets* at 10,000 times the rate of a human programmer. They haven't. They've found a machine that produces *liabilities* at 10,000 times the rate of any human programmer.

27/

Maintainability isn't just a matter of hard-won experience teaching you where the pitfalls are. It also requires the cultivation of "Fingerspitzengefühl" - the "fingertip feeling" that lets you make reasonable guesses about where never-before-seen pitfalls might emerge. It's a form of process knowledge - tacit, and not latent in even the largest corpus of code that you could use as training data:

https://pluralistic.net/2025/09/08/process-knowledge/#dance-monkey-dance

28/

Pluralistic: Fingerspitzengefühl (08 Sep 2025) – Pluralistic: Daily links from Cory Doctorow

*Boy* do tech bosses not get this. Take Microsoft. Their big bet right now is on "agentic AI." They think that if they install spyware on your computer that captures every keystroke, every communication and every screen you see and sends it all to Microsoft's cloud, and give a menagerie of chatbots access to it, then you'll be able to tell your computer, "Book me a train to Cardiff and find that hotel Cory mentioned last year and book me a room there" and it will do it.

29/

This is an incredibly unworkable idea. No chatbot is remotely capable of doing all these things, something that Microsoft freely stipulates. Rather than doing this with one chatbot, Microsoft proposes to break this down among dozens of chatbots, each of which Microsoft hopes to bring up to 95% reliability.

That's an utterly implausible chatbot standard in and of itself, but consider this: probabilities are *multiplicative*.

30/

A system containing two processes that each operate at 95% reliability has a net reliability of 90.25% (0.95 * 0.95). Break a task down among a couple dozen 95%-accurate bots and the chance that the whole task will be accomplished correctly drops below 30% - and every additional step multiplies it down further.
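The arithmetic, sketched out (assuming the steps succeed or fail independently):

```python
# Net reliability of a chain of independent steps is the product of
# the per-step reliabilities.
def pipeline_reliability(p_step: float, n_steps: int) -> float:
    return p_step ** n_steps

print(round(pipeline_reliability(0.95, 2), 4))   # 0.9025
print(round(pipeline_reliability(0.95, 24), 2))  # 0.29 -- under one in three
```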

Worse, Microsoft has admitted that it cannot protect the data in its cloud from secret access by US authorities - including the Trump administration:

https://www.forbes.com/sites/emmawoollacott/2025/07/22/microsoft-cant-keep-eu-data-safe-from-us-authorities/

31/

Microsoft Can't Keep EU Data Safe From US Authorities

Microsoft has admitted to a French senate hearing that it can't protect EU data from U.S. government snooping.

Forbes

So - as Signal's Meredith Whittaker and Udbhav Tiwari put it in their incredible 39C3 talk last week in Hamburg - Microsoft is about to abolish the very *idea* of privacy for *any* data on personal and corporate computers, in order to ship AI agents that cannot *ever* work:

https://www.youtube.com/watch?v=0ANECpNdt-4

32/

39C3 - AI Agent, AI Spy

YouTube

Meanwhile, a Microsoft exec got into trouble last December when he posted to Linkedin announcing his intention to have AI rewrite *all* of Microsoft's code. Refactoring Microsoft's codebase makes lots of sense. Microsoft - like British Airways and other legacy firms - has lots of very old code that represents unsustainable tech debt. But using AI to rewrite that code is a way to *start* with tech debt that will only accumulate as time goes by:

https://www.windowslatest.com/2025/12/24/microsoft-denies-rewriting-windows-11-using-ai-after-an-employees-one-engineer-one-month-one-million-code-post-on-linkedin-causes-outrage/

33/

Microsoft denies rewriting Windows 11 using AI after an employee's "one engineer, one month, one million code" post on LinkedIn causes outrage

Microsoft told Windows Latest that the company does not plan to rewrite Windows 11 using AI in Rust after an employee's post causes outrage.

Windows Latest

Now, some of you reading this have heard software engineers extolling the incredible value of using a chatbot to write code for them. Some of you *are* software engineers who have found chatbots incredibly useful in writing code for you. This is a common AI paradox: why do some people who use AI find it really helpful, while others loathe it? Is it that the people who don't like AI are "bad at AI"? Is it that the AI fans are lazy and don't care about the quality of their work?

34/

There's doubtless some of both going on, but even if you teach everyone to be an AI expert, and cull everyone who doesn't take pride in their work out of the sample, the paradox will still remain. The true solution to the AI paradox lies in automation theory, and the concept of "centaurs" and "reverse centaurs":

https://pluralistic.net/2025/09/11/vulgar-thatcherism/#there-is-an-alternative

35/

Pluralistic: Reverse centaurs are the answer to the AI paradox (11 Sep 2025) – Pluralistic: Daily links from Cory Doctorow

@pluralistic > Software engineering requires a very wide "context window," the thing that AI does not, and cannot have.

The type of work I'm doing now (data engineering for a large organization) is full of the sort of software development you describe here. Whatever code I write has to cope with data coming in, from multiple not-mutually-friendly parts of the company, and it has to at least try to produce consistently comprehensible (and reasonably updateable) data for downstream users or processors. A huge part of even beginning to make that possible is understanding, in detail, what those parts of the company actually want or need. That's generally the most challenging part of my day job. The code is the easy part, and being able to puke out more code in less time rarely, if ever, solves the hard parts.

@dpnash @pluralistic good thing humans are equipped with infinite capabilities!

@pluralistic Hi, regarding Bloomberg Terminals and RISC chips: do you have a source on that? I'd like to learn more details that are beside the point of your thread/blogpost.

(RISC is a processor design methodology that arose in the 1980s and is arguably still alive. Today's smartphones run on RISC chips too, but with different specific designs.)

@pluralistic
*SIGH* and then there are the dependency failures: when open source maintainers retire!
@pluralistic
“Baroque, crufty, and obfuscated” is my new punk string trio.

@pluralistic

The mapping software used by the company I work for has a related issue. If it can't find an address, it will sometimes erroneously route to the closest navigable point to the geographic center of the postal code it was given. This is a problem, especially for commercial vehicles that may be restricted in the routes they can travel.

@bruce @pluralistic
Not even the center point of the complete postcode, but often just the center of the first half, which can cover a pretty wide area.