Code generated by LLMs is going to need more testing than code written by developers. This seems self-evident to me, but I suspect a lot of people are going to learn it (or ignore it) the hard way.

Given that most existing codebases are not well tested, and most developers don't test, this does not bode well.

The practical consequence of using LLMs to generate code is that many developers will find they have unwittingly moved themselves into a role they were probably trying to avoid: they have automated the creation of legacy code and have redefined their job role as debugging and fixing such code.
@kevlin To me what is hard is defining the problem in complex systems. Not "I want to do X" but "do X while saving Y for use in Z in such a way that it doesn't break B". Maybe not a really good analogy, but being able to define all the parameters in a complex system is what makes me a good dev. I have never been able to lay out a complete problem, up front, in a way I could feed into AI. And the side problems are where the debugging is.
@katana0823 @kevlin But what if you had some kind of formal language with a constrained grammar to specify your problem, then maybe the AI would ... oh. wait ...