A conversation with a coworker re-triggered an intrusive thought that I find myself returning to regularly while working at a firm in the grip of AI influence:

Teams and engineering processes are like fish in tanks. A careful balance in the nitrogen cycle keeps delicate organisms alive; above a certain pH, it's just not plausible to believe things will keep working. But to understand effects, we have to trace them back to causes and account for time.

This leads to an appreciation of the toxicity of short-term incentives.

There's a reason I think very, *very* poorly of managers who lean on date-driven delivery: they are consistently externalising costs in ways that they *can and should* appreciate. Those costs take the form of high-interest, unstructured loans against future product and team capacity.

But far too many engineering leaders assume *ceteris paribus* ("all else equal") will hold.

That's not how the fishtank works.

If you dump a lot of food or chemicals into the tank to achieve short-term results, you *might* be able to juice things for your fish in the short run, but you also buy the consequences of a dynamically unstable system under stress.

So when managers assume that they'll "increase productivity" by adding a machine that generates more code, without taking into account the intertemporal effects of *owning* more code (of lowest-common-denominator quality), they're replicating the KLOC fallacy — measuring productivity by thousands of lines of code written — on steroids.

Owning code requires understanding, and one way we keep our fish tanks habitable is to swim: to do the work of moving code ourselves, sustaining the mental exertion that keeps our *fingerspitzengefühl* tuned.

Replacing that, or pushing it out of balance, creates a different set of intertemporal effects that can quite easily push the system into crisis.

*Ceteris paribus* about AI, as with frameworks, is wishful thinking; our engineering cultures are what we re-make them to be every day, in every way.