The slow part of software is NOT the initial generation of software. It's the maintenance and review of it.
If your management is pushing for 10x programmer output, hell even 40% more programmer output, what they're asking for is a stability crisis. There's no way around it. That's how it is right now.
You can use these tools for red teaming (caveat: you'll also get a lot of false positives). You can sort of use them for prototyping (though much of the understanding you'd normally gain by building the prototype yourself may be lost; still, it's one place where they can help). Those two categories don't create huge and unresolved copyright questions in your codebase, and I think you can justify them.
But if you're using them to actually write the software itself, you're borrowing against the future, against stability, and against institutional understanding of your own stack.
Oh yeah the other caveat about using them for prototyping, as @tamzin highlights below, is that "quickly thrown together prototypes" often become production code, to their authors' dismay.
In many ways, whiteboard prototypes are much better, as they are immune to this problem.
@cwebber I've described previous attempts at work to Increase Velocity as strapping on rocket skates so we can careen headlong into a brick wall faster.
With AI codegen I think we've decided the rocket skates weren't fast enough or the brick wall big enough, and are going full Saturn-V-Into-The-Sun.
I'm sure it'll be fine though. Really. What could possibly go wrong?
@cwebber imho what we are still lacking is a good taxonomy for maintenance.
Whilst "new code" can be easily measured by "lines of code" or through "new features" there is no metric for maintenance.
Because maintained code is a non-functional feature.
@d3sre did some amazing work on the other non-functional feature, info-sec, making the work of SOCs visible; see:
https://github.com/d3sre/IntelligentProcessLifecycle
Would you happen to know if anyone works on this?
@cwebber I posit that writing code itself is never the bottleneck (otherwise it would have been solved with cheap offshore programmers long ago).
The hard work of making software is designing it: the user experience, the business needs, the operational constraints.
Using generative AI to add code you don’t understand (or worse, features you don’t know why you’re adding) will make all of these things cumulatively harder.

AI coding tools are optimising the wrong thing and nobody wants to hear it. Writing code was already fast. The bottleneck is everything else: unclear requirements, review queues, terrified deploy cultures, and an org chart that needs six meetings to decide what colour the button should be.
@cwebber It feels like nobody in the corp world gives a shit about code reuse, code maintenance, code legacy, or time spent on support - what they want is a feature-generator machine without any logic
I wonder what kind of swords Japanese smiths would have made if they'd been allowed to change their technology each time the shōgun ordered a new feature for the wars they fought
@cwebber I was literally trying to put this into words today.
Thank you for writing it in so much of a better way than I ever could!
Well, the mantra at work is "move fast and break things", but for years some people managed to do a decent job and slow down just a little so as not to break everything...
Now the tide is too high: the CTO is pushing untested code to production, and I have coworkers who run scripts on their personal computers that open a Chrome login with their accounts and update things.
Every day now, Slack is full of broken pages, broken features, actions taken "by mistake".
We are so doomed.
@cwebber AWS too.