Code generated by LLMs is going to need more testing than code written by developers. This seems self-evident to me, but I suspect a lot of people are going to learn it (or ignore it) the hard way.

Given that most existing codebases are not well tested, and most developers don't test, this does not bode well.

The practical consequence of using LLMs to generate code is that many developers will find they have unwittingly moved themselves into a role they were probably trying to avoid: they have automated the creation of legacy code and have redefined their job role as debugging and fixing such code.
@kevlin @dreid When ChatGPT got publicly big and folks were searching for use cases, its code generation abilities were praised. Some folks were praising such tools for generating unit tests, which they clearly saw as low-value and easy. Of course, to me tests seemed like the last thing you'd want to generate, because we're very bad at reviewing code for correctness! So we'll have autogenerated legacy code with very misleading and incorrect tests. Yay.