Yeah @jonny's thread is great, really eye-opening.
It's an interesting question. There are a few different arguments that advocates for using these tools make.
- Skilled software engineers are very good at using imperfect tools -- figuring out the scenarios where they work well and how to work around the problems. @mttaggart's article was a great example of how this can work in practice, and @glyph has some thoughtful posts along these lines (not that either of them is an advocate of the tools, but they illustrate the point). Static analysis tools (my software engineering claim to fame) are a great example of this general tendency: they can be extremely useful despite high rates of false positives and false negatives.
- The tools will radically democratize who can create personal-use software -- stuff that addresses their own (and their friends' and family's) problems without being intended for broader use. For a lot of scenarios, attributes like scalability, reliability, and security don't necessarily matter that much; so being able to start with a natural-language description and get something "good enough" can potentially be useful.
- Agentic software development is a transformative approach that leverages today's immense computing power, so it can produce software at least as good as today's hand-crafted software (which, to be fair, mostly sucks) far more quickly.
Then again, as well as the issues that excellent article @rysiek discusses, advocates in general don't take Gender HCI, Feminist HCI, Post-Colonial Computing, Anti-Oppressive Design, Design Justice, Accessibility, Security, Algorithmic Discrimination, or Design from the Margins into account. Neither do the people creating these tools, and neither does the overwhelming majority of the existing software these tools have been trained on. So software generated by these tools is at best going to replicate the existing problems in these areas -- and more likely magnify them.
So this to me is where the bullet points above break down.
- Few if any software developers are "skilled" in all of these areas, so they don't know how to compensate for imperfect tools (and quite possibly aren't even aware of the tools' imperfections).
- "Personal use" tools that aren't accessible or designed from the margins, or that embed algorithmic discrimination, aren't useful for most people.
- Generating more software more quickly that magnifies (or even just reproduces) today's problems in all these areas magnifies oppressions.
And as you say, there's also the data stealing, exploitation, environmental racism, etc., of the current generation of tools -- and let's not forget fascism, eugenics, and cognitive issues!
In theory there are alternate approaches that can avoid these problems; @anildash has talked about using small models trained locally on his own code, and that seems like a potentially promising direction. In practice, though, the vast majority of advocates today seem to be using stuff from Anthropic, OpenAI, Meta ... even the ones who acknowledge the ethical issues don't actually address them.
@timnitGebru