Someone is now trying to argue that some of the most renowned software developers in the world think LLM-generated code is "just fine".
@stevefenton I've seen quite a few developers display confirmation bias when we discuss sessions at the end. I ask how much of the work they think the tool did. They remember it doing most of it, but I've just watched them correct the model every other interaction, get stuck, and edit the code themselves many times. I also see this a lot in video tutorials.
The "I" in "AI" is *us*
LLMs are trained on code that I think is crap. Sure, I use code from Stack Overflow and other sources, but I refactor it to within an inch of its life before moving on. How could I expect any better from LLMs?