Dragan Stepanović

@d_stepanovic
Trying hard not to think about small batches, bottlenecks, and systems. In the meantime: XP, Theory of Constraints, Lean, Systems Thinking
Website: https://draganstepanovic.com

Preparing a talk, "Agentic coding - Systems Perspective", and along the way I realized that cognitive/comprehension debt didn't arrive with the advent of agentic coding. Most teams doing work in isolation (individually) were already experiencing it heavily.

The difference is that, on this spectrum of fragmentation of the mental model of how the system works, we went from erosion of a shared mental model to its complete dissolution.

I also have a new understanding of why teams doing pair/mob programming worked so well.

Let's not forget that LLMs were fed the ever-decreasing quality of work our industry has been producing over the last 15 years, a result of cheap money and ever-lower central bank interest rates.

And both of these reinforcing loops are accelerating as LLMs dogfood their own output.

So, here's the thing.

If this (some would say embarrassing) level of availability came as a result of "90% of our code is written by agents, and we (mostly) don't review the code they generate", it's fine as long as the demand for your product is so disproportionately strong that this level of (un)availability doesn't dent that demand to the point where customers would consider someone else instead of you.

1/n

I can guarantee you that addressing an actual bottleneck in the system shouldn't look like pushing elephant-sized inventory through a boa constrictor (coding → review → integration → deployment → etc.).

And you can tell by the elephant rolling its eyes.

I feel this slide from Jez Humble's talks has become more relevant than ever.

It's not about generating bigger batches faster, but thinner slices validated sooner.
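
To make that concrete, here's a minimal sketch (mine, not from Jez's slide): it models the pipeline above as four sequential stages, each charging a fixed per-batch transaction cost plus processing time proportional to batch size. All numbers are made up for illustration.

```python
# Minimal sketch (my illustration, not from the thread): time to first
# validated slice when the same amount of work flows through a pipeline
# as one big batch vs. many thin slices.
#
# Assumptions (all mine): each stage has a fixed per-batch transaction
# cost plus processing time proportional to batch size; batches move
# through stages sequentially.

STAGES = ["coding", "review", "integration", "deployment"]
TRANSACTION_COST = 2.0   # hours of per-batch overhead at each stage (assumed)
HOURS_PER_UNIT = 0.5     # processing time per unit of work (assumed)
TOTAL_WORK = 40          # units of work to deliver (assumed)

def time_to_first_validation(batch_size: int) -> float:
    """Time until the first batch clears all stages (first feedback)."""
    per_stage = TRANSACTION_COST + batch_size * HOURS_PER_UNIT
    return per_stage * len(STAGES)

def total_time(batch_size: int) -> float:
    """Time until all work clears all stages (no pipelining, worst case)."""
    batches = -(-TOTAL_WORK // batch_size)  # ceiling division
    return batches * time_to_first_validation(batch_size)

for size in (40, 10, 2):
    print(f"batch={size:>2}: first feedback after "
          f"{time_to_first_validation(size):5.1f}h, "
          f"all done after {total_time(size):6.1f}h")
```

Slicing the same 40 units into batches of 2 cuts first feedback from 88h to 12h. The flip side: with high transaction costs, many small batches cost more in total, which is exactly the coupling the slogans further down point at.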

I was trying to find an image for the change in dynamics that will likely happen as a result of the plummeting cost of generating more code with AI, and "elephant travelling through a boa constrictor" nails it.

Essentially, the elephant just got bigger.

Oh, I have two more from Eli (Goldratt) on AI:
I'll just leave this here...
Sure, dude...

As long as LLM pricing is based on how much/how often you're allowed to converse with the model, Spec-Driven Development and big(ger) batches will be a normal response to that.

Scarcity drives batch transaction costs up, which drives the average batch size up to compensate for it.
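
This is the classic economic batch size (EOQ) relationship that Reinertsen applies to product development: optimal batch size grows with the square root of the per-batch transaction cost. A worked sketch under assumed numbers (all mine):

```python
# Sketch of the economic batch size (EOQ) relationship. Numbers and
# variable names are mine, purely for illustration.
import math

def optimal_batch_size(transaction_cost: float, holding_cost: float) -> float:
    """EOQ with demand normalized to 1: Q* = sqrt(2 * S / H),
    where S is the per-batch transaction cost and H the cost of
    holding (delaying feedback on) one unit of work."""
    return math.sqrt(2 * transaction_cost / holding_cost)

# If pricing scarcity doubles the effective cost of each conversation
# (transaction), the economically "rational" batch grows by ~41%.
for s in (1.0, 2.0, 4.0):
    print(f"transaction cost {s:.1f} -> batch size {optimal_batch_size(s, 1.0):.2f}")
```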

NO SMALLER BATCHES WITHOUT HIGHER AVAILABILITY.

NO REDUCTION OF REWORK WITHOUT SMALLER BATCHES.

I really need a T-shirt with this one.