Code generated by LLMs is going to need more testing than code written by developers. This seems self-evident to me, but I suspect a lot of people are going to learn it (or ignore it) the hard way.

Given that most existing codebases are not well tested, and most developers don't test, this does not bode well.

The practical consequence of using LLMs to generate code is that many developers will find they have unwittingly moved themselves into a role they were probably trying to avoid: they have automated the creation of legacy code and have redefined their job role as debugging and fixing such code.

@kevlin @pvaneynd I think this will not end well. Coding is a skill that must be maintained. The more you use an AI to generate the code, the more your coding skills will deteriorate and the harder it will be for you to evaluate whether the code produced actually does what it is supposed to. That is (as noted by John Siracusa) really hard, even if it is code you have written yourself. There is a reason that programmers spend a lot of time debugging their own code.