@hanshuebner I was wondering, when somebody started comparing LLMs to the One Ring
@hanshuebner it sure is tempting to go from reviewing every single line to just give it a glance and accept a change out of optimisim. quite the slippery slope...

@mknoszlig I have some projects where I don't look at the code at all. It depends on the context and your development style whether you require human review or not.

Review is not a panacea, and that is true whether humans or LLMs generate the code. If you accept large changes from humans, chances are that your reviews are not effective and additional quality measures, like more rigorous, specification-based testing, are needed. I've worked in teams with no reviews, teams with no tests, and teams with neither.

@hanshuebner absolutely, for me the difference is that when i review (or write, for that matter) code, i'm accepting at least some responsibility for it. The context may be such that it's totally fine to have absolutely horrible code that you would never dare show anybody, or it may require extensive testing and validation etc. either way, that's on me.

LLMs can't be responsible for their output. I think that's very important to maintain.

@mknoszlig The question of responsibility is a good point, of course. In my last job, one of our developers was motivated to make rapid progress coding up a complex UI with an LLM. It worked very well for the most part, but then some insane bug was found.

The dev was pretty depressed when the boss lashed out at him. After all, he had been asked to do it this way and had been applauded for the quick progress.

Shared ownership is difficult, but with LLM code, there really can be no blaming of individuals.

@hanshuebner i'm not one for blaming individuals on a team even when no LLMs are involved. but i think there's still a difference between being (or feeling) responsible and "it's X's fault". anyways, i haven't thought this through to the end yet, obviously :)

@hanshuebner I think you made your points very clear in there. And I agree with your viewpoints. I hate most LLMs myself and have witnessed large piles of shit coming out of them. But I've also witnessed short prompts giving me tools that would have taken me days to write myself.

And then again the hilarious smart asses chipping in ... 🙈