"Writing code" is an incredibly useful, fun, and engrossing pastime. It involves breaking down complex tasks into discrete steps that are so precisely described that a computer can reliably perform them, and optimising that performance by finding clever ways of minimizing the demands the code puts on the computer's resources, such as RAM and processor cycles.

5/

Meanwhile, "software engineering" is a discipline that subsumes "writing code," but with a focus on the long-term operations of the *system* the code is part of. Software engineering concerns itself with the upstream processes that generate the data the system receives. It concerns itself with the downstream processes that the system emits processed information to.

6/

It concerns itself with the adjacent systems that receive data from the same upstream processes and/or emit data to the same downstream processes the system does.

"Writing code" is about making code that *runs well*. "Software engineering" is about making code that *fails well*.

7/

It's about making code that is legible - that can be understood by third parties asked to maintain it, or who might be asked to adapt the processes downstream, upstream or adjacent to the system to keep it from breaking. It's about making code that can be adapted, for example, when the underlying computer architecture it runs on is retired and has to be replaced, either with a new kind of computer, or with an emulated version of the old one:

https://www.theregister.com/2026/01/05/hpux_end_of_life/

8/

Because that's the thing: any nontrivial code has to interact with the outside world, and the outside world isn't static, it's *dynamic*. The outside world busts through the assumptions made by software authors *all the time* and every time it does, the software needs to be fixed. Remember Y2K? That was a day when perfectly functional code, running on perfectly functional hardware, would stop functioning - not because the code changed, but because *time marched on*.

9/

We're 12 years away from the Y2038 problem, when 32-bit flavors of Unix will all cease to work, because they, too, will have run out of computable seconds. These computers haven't changed, their software hasn't changed, but the world - by dint of ticking over, a second at a time, for 68 years - will wear through their seams, and they will rupture:

https://www.theregister.com/2025/08/23/the_unix_epochalypse_might_be/
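The arithmetic behind the epochalypse is easy to check. A minimal sketch of how a signed 32-bit `time_t` runs out of seconds and wraps (the `as_int32` helper is mine, simulating the overflow):

```python
from datetime import datetime, timezone, timedelta

# A signed 32-bit time_t counts seconds from the Unix epoch (1970-01-01 UTC)
# and tops out at 2**31 - 1. One second later, it wraps negative.
EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
max_t = 2**31 - 1

print(EPOCH + timedelta(seconds=max_t))  # 2038-01-19 03:14:07+00:00

def as_int32(t: int) -> int:
    """Reinterpret an integer as a signed 32-bit value, as the hardware would."""
    t &= 0xFFFFFFFF
    return t - 2**32 if t >= 2**31 else t

wrapped = as_int32(max_t + 1)
print(wrapped)                            # -2147483648
print(EPOCH + timedelta(seconds=wrapped)) # 1901-12-13 20:45:52+00:00
```

One tick past the maximum and the clock doesn't stop - it reports a date in 1901, which is exactly the kind of failure that ripples into every system downstream.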

10/

The existence of "the world" is an inescapable factor that wears out software and requires it to be rebuilt, often at enormous expense. The longer code is in operation, the more likely it is that it will encounter "the world." Take the code that devices use to report on their physical location. Originally, this was used for things like billing - determining which carrier or provider's network you were using and whether you were roaming.

11/

Then our mobile devices used it to determine our location and give us turn-by-turn directions. Then this code was repurposed again to help us find our lost devices. This, in turn, became a way to locate *stolen* devices, a use-case that diverges sharply from finding lost devices in important ways - for example, when locating a lost device, you don't have to contend with the possibility that a malicious actor has disabled the "find my lost device" facility.

12/

These additional use cases - upstream, downstream and adjacent - exposed bugs in the code that never surfaced in the earlier apps. For example, all location services have some kind of default behavior in the (very common) event that they're not really sure where they are. Maybe they have a general fix - for example, they know which cellular mast they're connected to or they know where they were the *last* time they got an accurate location fix - or maybe they're totally lost.

13/

It turns out that in many cases, location apps drew a circle around all the places they *could* be and then set their location to the middle of that circle. That's fine if the circle is only a few feet in diameter, or if the app quickly replaces this approximation with a more precise location. But what if the location is miles and miles across, and the location fix *never* improves?
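The pattern can be sketched in a few lines. Everything here is hypothetical - the function name, the candidate-averaging, and the hardcoded fallback - but the logic mirrors the failure mode: average the possible positions, and fall back to a fixed default when you know nothing at all.

```python
# Hypothetical sketch of a "center of the possible area" location fallback.
def fallback_location(candidates: list[tuple[float, float]]) -> tuple[float, float]:
    """Return a best-guess (lat, lon) from a list of candidate positions."""
    if not candidates:
        # No information at all: a once-"harmless" hardcoded default,
        # here near the geographic center of the contiguous USA - the
        # kind of default that sent angry strangers to a Kansas farm.
        return (39.8283, -98.5795)
    lat = sum(p[0] for p in candidates) / len(candidates)
    lon = sum(p[1] for p in candidates) / len(candidates)
    return (lat, lon)
```

The averaging is harmless when the candidates are a few feet apart; the same code becomes a hazard when the "circle" is a continent and the fix never improves.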

14/

What if the location for any IP address without a defined location is given as *the center of the continental USA* and any app that doesn't know where it is reports that it is in a house in Kansas, sending dozens of furious (occasionally armed) strangers to that house, insisting that the owners are in possession of their stolen phones and tablets?

https://theweek.com/articles/624040/how-internet-mapping-glitch-turned-kansas-farm-into-digital-hell

You don't just have to fix this bug once - you have to fix it over and over again.

15/

In Georgia:

https://www.jezebel.com/why-lost-phones-keep-pointing-at-this-atlanta-couples-h-1793854491

In Texas:

https://abc7chicago.com/post/find-my-iphone-apple-error-strangers-at-texas-familys-home-scott-schuster/13096627/

And in my town of Burbank, where Google's location-sharing service once told us that our then-11-year-old daughter (whose phone we couldn't reach) was 12 miles away, on a freeway ramp in an unincorporated area of LA County (she was at a nearby park, but out of range, and the app estimated her location as the center of the region it had last fixed her in). It was a rough couple of hours.

16/

The underlying code - the code that uses some once-harmless default to fudge unknown locations - needs to be updated *constantly*, because the upstream, downstream and adjacent processes connected to it are changing *constantly*. The longer that code sits there, the more superannuated its original behaviors become, and the more baroque, crufty and obfuscated the patches layered atop it become.

17/

Code is not an asset - it's a liability. The longer a computer system has been running, the more tech debt it represents. The more important the system is, the harder it is to bring down and completely redo. Instead, new layers of code are slathered atop it, and wherever those layers meet, there are fissures where the systems behave in ways that don't quite match up.

18/

Worse still: when two companies are merged, their seamed, fissured IT systems are smashed together, so that now there are *adjacent* sources of tech debt, as well as upstream and downstream cracks:

https://pluralistic.net/2024/06/28/dealer-management-software/#antonin-scalia-stole-your-car

19/

That's why giant companies are so susceptible to ransomware attacks - they're full of incompatible systems that have been coaxed into a facsimile of compatibility with various forms of digital silly putty, string and baling wire. They are not watertight and they cannot be made watertight.

20/

Even if they're not taken down by hackers, they sometimes just fall over and can't be stood back up again - like when Southwest Airlines' computers crashed for all of Christmas week 2022, stranding millions of travelers:

https://pluralistic.net/2023/01/16/for-petes-sake/#unfair-and-deceptive

Airlines are especially bad, because they computerized early, and can't ever shut down the old computers to replace them with new ones.

21/

This is why their apps are such dogshit - and why it's so awful that they've fired their customer service personnel and require fliers to use the apps for *everything*, even though the apps do. not. work. These apps won't ever work.

The reason that British Airways' app displays "An unknown error has occurred" 40-80% of the time isn't (just) that they fired all their IT staff and outsourced to low bidders overseas.

22/

It's that, sure - but also that BA's first computers ran on electromechanical valves, and everything since has to be backwards-compatible with a system that one of Alan Turing's proteges gnawed out of a whole log with his very own front teeth. Code is a liability, not an asset (BA's new app is years behind schedule).

23/

Code is a liability. The servers for the Bloomberg terminals that turned Michael Bloomberg into a billionaire run on RISC chips, meaning that the company is locked into using a dwindling number of specialist hardware and data-center vendors, paying specialized programmers, and building brittle chains of code to connect these RISC systems to their less exotic equivalents in the world. Code isn't an asset.

24/

AI can write code, but AI can't do software engineering. Software engineering is all about thinking through *context* - what will come before this system? What will come after it? What will sit alongside it? How will the world change? Software engineering requires a very wide "context window" - the thing that AI does not, and cannot, have.

25/

AI has a very narrow and shallow context window, and a linear expansion of AI's context window requires a *geometric* expansion in the amount of computational resources the AI consumes:

https://pluralistic.net/2025/10/29/worker-frightening-machines/#robots-stole-your-jerb-kinda

Writing code that works, without consideration of how it will fail, is a recipe for catastrophe. It is a way to create tech debt at scale. It is shoveling asbestos into the walls of our technological society.

26/

Bosses *do not know* that code is a liability, not an asset. That's why they won't shut the fuck up about the chatbots that shit out 10,000 times more code than any human programmer. They think they've found a machine that produces *assets* at 10,000 times the rate of a human programmer. They haven't. They've found a machine that produces *liability* at 10,000 times the rate of any human programmer.

27/

Maintainability isn't just a matter of hard-won experience teaching you where the pitfalls are. It also requires the cultivation of "Fingerspitzengefühl" - the "fingertip feeling" that lets you make reasonable guesses about where never-before-seen pitfalls might emerge. It's a form of process knowledge. It is tacit. It is not latent in even the largest corpus of code that you could use as training data:

https://pluralistic.net/2025/09/08/process-knowledge/#dance-monkey-dance

28/

*Boy* do tech bosses not get this. Take Microsoft. Their big bet right now is on "agentic AI." They think that if they install spyware on your computer that captures every keystroke, every communication and every screen you see, sends it all to Microsoft's cloud, and gives a menagerie of chatbots access to it, then you'll be able to tell your computer, "Book me a train to Cardiff and find that hotel Cory mentioned last year and book me a room there" - and it will do it.

29/

This is an incredibly unworkable idea. No chatbot is remotely capable of doing all these things, something that Microsoft freely stipulates. Rather than doing this with one chatbot, Microsoft proposes to break this down among dozens of chatbots, each of which Microsoft hopes to bring up to 95% reliability.

That's an utterly implausible chatbot standard in and of itself, but consider this: probabilities are *multiplicative*.

30/

A system containing two processes that each operate at 95% reliability has a net reliability of 90.25% (0.95 * 0.95). Break a task down among a couple dozen 95%-accurate bots and fewer than a third of runs will complete correctly end to end (0.95^24 ≈ 0.29); chain a hundred such steps and the success rate rounds to *zero*.
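The multiplication is easy to check for yourself. A minimal sketch (the function name is mine):

```python
# Reliability of a chain of independent steps is multiplicative.
def chain_reliability(per_step: float, steps: int) -> float:
    """Probability that `steps` independent stages, each `per_step` reliable, all succeed."""
    return per_step ** steps

print(chain_reliability(0.95, 2))             # 0.9025
print(round(chain_reliability(0.95, 24), 3))  # ~0.292 - under a third
print(round(chain_reliability(0.95, 100), 4)) # ~0.0059 - effectively zero
```

Note the assumption of independence; in practice, one agent's error feeds the next agent garbage, so real chains can do even worse.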

Worse, Microsoft is on record as saying that they will grant the Trump administration secret access to all the data in its cloud:

https://www.forbes.com/sites/emmawoollacott/2025/07/22/microsoft-cant-keep-eu-data-safe-from-us-authorities/

31/

So - as Signal's Meredith Whittaker and Udbhav Tiwari put it in their incredible 39C3 talk last week in Hamburg - Microsoft is about to abolish the very *idea* of privacy for *any* data on personal and corporate computers, in order to ship AI agents that cannot *ever* work:

https://www.youtube.com/watch?v=0ANECpNdt-4

32/

Meanwhile, a Microsoft exec got into trouble last December when he posted to LinkedIn announcing his intention to have AI rewrite *all* of Microsoft's code. Refactoring Microsoft's codebase makes lots of sense. Microsoft - like British Airways and other legacy firms - has lots of very old code that represents unsustainable tech debt. But using AI to rewrite that code is a way to *start* with tech debt that will only accumulate as time goes by:

https://www.windowslatest.com/2025/12/24/microsoft-denies-rewriting-windows-11-using-ai-after-an-employees-one-engineer-one-month-one-million-code-post-on-linkedin-causes-outrage/

33/

Now, some of you reading this have heard software engineers extolling the incredible value of using a chatbot to write code for them. Some of you *are* software engineers who have found chatbots incredibly useful in writing code for you. This is a common AI paradox: why do some people who use AI find it really helpful, while others loathe it? Is it that the people who don't like AI are "bad at AI?" Is it that the AI fans are lazy and don't care about the quality of their work?

34/

There's doubtless some of both going on, but even if you teach everyone to be an AI expert, and cull everyone who doesn't take pride in their work out of the sample, the paradox will still remain. The true solution to the AI paradox lies in automation theory, and the concept of "centaurs" and "reverse centaurs":

https://pluralistic.net/2025/09/11/vulgar-thatcherism/#there-is-an-alternative

35/

In automation theory, a "centaur" is a person, assisted by a machine. A "reverse centaur" is a person conscripted to *assist a machine*. If you're a software engineer who uses AI to write routine code that you have the time and experience to validate, deploying your Fingerspitzengefühl and process knowledge to ensure that it's fit for purpose, it's easy to see why you might find using AI (when you choose to, in ways you choose to, at a pace you choose to go at) to be useful.

36/

But if you're a software engineer who's been ordered to produce code at 10x, or 100x, or 10,000x your previous rate, and the only way to do that is via AI, and there is no human way that you could possibly review that code and ensure that it will not break on first contact with the world, you'll hate it (you'll hate it even more if you've been turned into the AI's accountability sink, personally on the hook for the AI's mistakes):

https://pluralistic.net/2025/05/27/rancid-vibe-coding/#class-war

37/

There's another way in which software engineers find AI-generated code to be incredibly helpful: when that code is *isolated*. If you're doing a single project - say, converting one batch of files to another format, just once - you don't have to worry about downstream, upstream or adjacent processes. There aren't any. You're writing code to do something once, without interacting with any other systems.
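A hedged sketch of what such a one-off looks like - every name here is invented, and nothing upstream or downstream depends on it, which is precisely why correctness-by-inspection is good enough:

```python
# Hypothetical one-off utility: convert every CSV in a directory to JSON, once.
import csv
import json
import tempfile
from pathlib import Path

def convert_dir(src: Path) -> int:
    """Write a sibling .json for each .csv in src; return how many were converted."""
    count = 0
    for csv_path in src.glob("*.csv"):
        with open(csv_path, newline="") as f:
            rows = list(csv.DictReader(f))
        csv_path.with_suffix(".json").write_text(json.dumps(rows, indent=2))
        count += 1
    return count

# Demonstrate on a throwaway directory:
with tempfile.TemporaryDirectory() as d:
    src = Path(d)
    (src / "inventory.csv").write_text("sku,qty\nA1,3\nB2,7\n")
    print(convert_dir(src))  # 1
    print(json.loads((src / "inventory.json").read_text()))
```

Run it, eyeball the output, throw it away. No seams, no fissures, no tech debt - the rare case where generated code carries no liability at all.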

38/

A *lot* of coding is this kind of utility project. It's tedious, thankless, and ripe for automation. Lots of personal projects fall into this bucket, and of course, by definition, a personal project is a centaur project. No one forces you to use AI in a personal project - it's always your choice how and when you make personal use of any tool.

39/

But the fact that software engineers can sometimes make their work better with AI doesn't invalidate the fact that code is a liability, not an asset, and that AI code represents liability production at scale.

In the story of technological unemployment, there's the idea that new technology creates new jobs even as it makes old ones obsolete: for every blacksmith put out of work by the automobile, there's a job waiting as a mechanic.

40/

In the years since the AI bubble began inflating, we've heard lots of versions of this: AI would create jobs for "prompt engineers" - or even create jobs that we can't imagine, because they won't exist until AI has changed the world beyond recognition.

I wouldn't bank on getting work in a fanciful trade that literally can't be imagined because our consciousnesses haven't yet been so altered by AI that they've acquired the capacity to conceptualize these new modes of work.

41/

But if you *are* looking for a job that AI will definitely create, by the millions, I have a suggestion: digital asbestos removal.

For if AI code - written at 10,000 times the speed of any human coder, designed to work well, but not to fail gracefully - is the digital asbestos we're filling our walls with, then our descendants will spend generations digging that asbestos out of the walls.

42/

There will be plenty of work fixing the things that we broke thanks to the most dangerous AI psychosis of all - the hallucinatory belief that "writing code" is the same thing as "software engineering." At the rate we're going, we'll have full employment for generations of asbestos removers.

43/

I'm coming to Colorado! Catch me in #Denver on Jan 22 at The Tattered Cover:

https://www.eventbrite.com/e/cory-doctorow-live-at-tattered-cover-colfax-tickets-1976644174937

And in #ColoradoSprings from Jan 23-25, where I'm the Guest of Honor at COSine:

https://www.firstfridayfandom.org/cosine/

Then I'll be in #Ottawa on Jan 28 at Perfect Books:

https://www.instagram.com/p/DS2nGiHiNUh/

And in #Toronto with Tim Wu on Jan 30:

https://nowtoronto.com/event/cory-doctorow-and-tim-wu-enshittification-and-extraction/

44/

