Passing thought on the Claude code leak, and how messy and wasteful the code apparently is (as many of us suspected):

There’s been a lot of focus on the energy & environmental costs of running LLMs. There are also energy & environmental costs to deploying LLM-generated code.

Similarly, there’s a lot of focus on the kickbacks that cloud vendors who fund AI get from the LLMs renting their servers. There may be a similar kickback from computationally inefficient LLM-generated software renting those same servers.

1/

There’s a lot of software out there where either (1) the users are captive users, or (2) actual outcomes don’t matter, and the important thing is to check the box, to have officially pretended to build the thing.

That’s the sort of software where development costs are especially painful for the MBAs, and where pushing the frontiers of the “fast build, low quality” quadrant may be a killer market — even if it’s just fast and not so cheap.

2/

Highly optimized code is of course extremely expensive — both to build and to maintain. But now, next to vibe-coded output, even just-average-performance code is the more costly alternative to develop.

And sure, grinding out a vibe-coded LLM horror so you can check that business box may leave you with code that’s not only especially buggy but also RAM- and CPU-hungry. And sure, that’s expensive to deploy. But hey: your cloud spend is already preposterous, right? And high deployment costs are more predictable than high development costs…right?

3/

I can easily imagine scenarios where huge swaths of guts-of-the-business software increase their resource needs in production by an order of magnitude, •and• where businesses are perfectly happy to pay that cost.

That’s a story with horrifying environmental costs.

4/

A whole lot of our present moment boils down to resource cost externalities — externalities that in many cases society has intentionally created.

We made carbon emissions and water use way too cheap, and we pay dearly for it every day.

/end