I'm a big fan of this explanation/rant from Andrew Murphy.

Taken as a whole, there are many bottlenecks in a corporate software development process. The "load-bearing" calendar is a great example!

Speeding up code creation just increases pressure on the bottleneck, which decreases throughput.

https://andrewmurphy.io/blog/if-you-thought-the-speed-of-writing-code-was-your-problem-you-have-bigger-problems

If you thought the speed of writing code was your problem - you have bigger problems | Debugging Leadership

AI coding tools are optimising the wrong thing and nobody wants to hear it. Writing code was already fast. The bottleneck is everything else: unclear requirements, review queues, terrified deploy cultures, and an org chart that needs six meetings to decide what colour the button should be.


So why are we still trying to optimize code creation?

For decades, people with power - executives and product people - have been shifting the blame for strategy failures and poor market insight onto development "productivity."

This AI moment should be incredibly clarifying. Like, it should be the reductio ad absurdum of a productivity-centric approach.

The fact that we are *not* seeing wildly improving software all around us tells us everything we need to know.

There is no flourishing of value delivery, no new product categories, no more needs being satisfied better. It’s the opposite.

All we are seeing is decreases in quality, because 👏 code 👏 creation 👏 is not 👏 the problem.

@elizayer You gave me an idea: maybe it's because writing code is still seen as this mystical dark art that needs to be wrestled from the hands of those creepy wizards, pardon, programmers. A magic mirror on the wall that never says "you can't do that" is just the thing.

@felix

Exactly. EXACTLY! I think it's a direct response to the growth of mystical-feeling engineer power.

@elizayer

The good news is:

Open source maintainers are seeing an increase in the quality of AI security tools, and those tools will soon be in the hands of bad actors too.

Then it will be mandatory to build good software, and (I'll make the leap of faith that) you have to understand the business needs to create simple software that handles the issues.

@elizayer this has never been about quality and only about the business class trying to free themselves from those damned uppity engineers
@elizayer Exactly! I’ve been trying to explain to people, especially those pushing AI at work, that writing code is not the hard part of my job. Identifying the real-world problems and designing solutions that are as minimalist and simple as possible are the hard parts. The code is an implementation detail.
@elizayer to be 100% completely super fair, we are seeing a massive increase in scams. So AI is good for something. Scams. It’s good for scams.
@elizayer i think about this. according to the promises, all the little snags and bugs and oversights in all the software i use should be gone by now. "everyone's focusing on bigger things" doesn't excuse it, i was given the expectation these types of fixes should have been trivial and quick. computing should be better than ever, or at least as good as it was in the 2010s
@elizayer yes, this. Code creation hasn’t been an issue for a long, long, long time. See “no silver bullet” (https://worrydream.com/refs/Brooks_1986_-_No_Silver_Bullet.pdf) written in *1986*.
@elizayer @beep I was literally just talking to someone about #Waymo for this same reason. Tech has reached the point where it has become more than abundantly obvious to anyone who dares to ask a single question that the objective is no longer the improvement of anyone’s life but the #EpsteinClass’s. Why is taking a Waymo better than taking an Uber? Because now someone’s out of a job. Why is #AI better than a software developer? Because now someone’s out of a job
@BmeBenji Great question! From what I've seen building with AI in production, the key insight most people miss is that the infrastructure (eval pipelines, monitoring, fallback chains) matters more than model selection. Happy to share more details on any specific aspect.

@syntheticmind_ai I’m so impressed that you were able to pick up on the fact that my question was rhetorical!
/s

-_-
#OkClanker

@BmeBenji @beep

I generally agree!

On the narrow Waymo point, a few things have made me reconsider recently:

- Cyclists who feel Waymos are more predictable and less likely to make the kinds of mistakes that come from inattention. Or to be actively hostile.

- Women and older people who've said they feel vulnerable alone in a car with a driver.

@BmeBenji @beep

So much of this tech might have great potential if it were grown root and branch from inclusiveness and accessibility.

But honestly, this thought just makes me sadder.

I know we'll never get those theoretical benefits from tech built solely out of extractive motivations. 😔

@elizayer @beep That’s more than fair. Clearly I still forget my privilege
@elizayer @BmeBenji @beep also folks with impairments that mean they can't drive. This is a great piece of podcast journalism about the response to Waymo applying to operate in Chicago:
https://pca.st/episode/ef4a328f-dbd4-45cb-8a0b-985250d62293
The Trial of the Driverless Car

In blue cities throughout the country, unions and politicians are fighting to ban driverless cars. We travel to Boston, where the fight has reached a fever pitch, and where the cars themselves will…

@Niall @elizayer While I haven’t listened to the episode — I didn’t realize Pinnamaneni and Vogt had a new project, after the Gimlet debacle — I can say the accessibility question here in Boston is much, much more complicated than that.
@beep @elizayer well yes, it's clear you haven't listened to the episode ;-)