I’m not 100% sure what to make of this, but I definitely thought it was interesting.

“Technical debt becomes a wise investment”

https://www.danshapiro.com/blog/2025/12/this-is-a-time-of-technical-deflation/

This is a Time of Technical Deflation – Dan Shapiro's Blog

@mattiem I’m glad you shared this, it was very thought provoking. I see two issues though:

1. The LLMs are making the debt, not reducing it. Talk to someone who has a big project written entirely by LLMs; it often has 10x more lines of code than it feels like it should. If they’re expecting future LLMs to be able to solve this, that’s exactly what debt is. “I’ll totally be able to pay this back later.”

@mattiem

2. “You will get to pay them back with cheap AI hours tomorrow”: citation needed here, lol. Tokens are currently massively subsidized and it’s not clear what happens when we have to pay full rack rate. The model companies are betting that tokens get cheaper faster than their subsidies run out, but that remains to be seen!

@Soroush hmm yes. And I think an interesting dimension is how this affects conversations around funding work on tech debt, especially as software profitability gets squeezed.
@mattiem I wish I had a sense for how much tech companies are spending on tokens and how they think about what a good amount to spend is
@Soroush I think that in the end, whatever it is, their spend on humans is more
@mattiem that might not stay true if token prices 10x or 20x, though. I don’t think that’s going to happen, but it is possible, I would say.
@Soroush you think tokens at 20x would cost more than human labor? I would be extremely surprised by this, but I’ll admit that I don’t know.
@mattiem no, 20x more than they currently cost. Put another way, between the chip companies, the data centers, the cloud companies, the AI labs, and the actual end-user product companies, how big of a subsidy are we looking at?
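One way to frame the question above is a quick back-of-envelope comparison. Every number below is an illustrative assumption I picked for the sketch (blended token price, usage per engineer, loaded labor cost), not data from the thread or any vendor's published pricing:

```python
# Back-of-envelope: does token spend exceed human labor cost at Nx prices?
# ALL constants are illustrative assumptions, not real published figures.
TOKEN_COST_PER_MILLION = 15.0        # assumed: blended $/1M tokens today
TOKENS_PER_ENG_MONTH = 500_000_000   # assumed: heavy agentic use per engineer
ENG_COST_PER_MONTH = 20_000.0        # assumed: fully loaded engineer cost

def monthly_token_spend(price_multiplier: float) -> float:
    """Token spend per engineer-month if prices rise by `price_multiplier`."""
    return TOKEN_COST_PER_MILLION * price_multiplier * TOKENS_PER_ENG_MONTH / 1_000_000

for m in (1, 10, 20):
    spend = monthly_token_spend(m)
    verdict = "tokens cost more" if spend > ENG_COST_PER_MONTH else "human costs more"
    print(f"{m:>2}x prices: ${spend:>9,.0f}/mo vs ${ENG_COST_PER_MONTH:,.0f} -> {verdict}")
```

Under these made-up assumptions, 1x lands at $7,500/month (well under the human), while 20x lands at $150,000/month, which is why the multiplier matters so much: the answer flips somewhere in between, and the crossover point depends entirely on which assumed numbers you believe.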
@Soroush @mattiem open-weights models are pretty cheap to run. They’re not as smart as the bleeding edge (Opus 4.6, codex-5.3, etc.), but they’re not THAT far behind. And you can run them on hardware that costs a few thousand dollars if you don’t want to rely on a cloud provider (and the cloud providers running open-weights models for inference only are profitable). The tech is more accessible than we thought 2 years ago.