Code generated by LLMs is going to need more testing than code written by developers. This seems self-evident to me, but I suspect a lot of people are going to learn it (or ignore it) the hard way.

Given that most existing codebases are not well tested, and most developers don't test, this does not bode well.

The practical consequence of using LLMs to generate code is that many developers will find they have unwittingly moved themselves into a role they were probably trying to avoid: they have automated the creation of legacy code and have redefined their job role as debugging and fixing such code.
@kevlin To me what is hard is defining the problem in complex systems. Not "I want to do X" but "do X while saving Y for use in Z, in such a way that it doesn't break B". Maybe not a great analogy, but being able to define all the parameters in a complex system is what makes me a good dev. I have never been able to lay out a complete problem, up front, in a way I could feed into an AI. And the side problems are where the debugging is.
@katana0823 @kevlin Somewhat ironically, non-AI systems have been great at this for decades (e.g. RDBMS query planners), and as an industry we have done almost nothing to pursue the potential in that area. I think this is because we don't agree on a good cost measurement for each subsystem and/or approach, so aggregating costs has seemed beyond reach. LLMs are showing the value of tolerating some minor level of error in order to get a valuable aggregate.