RE: https://mean.engineer/@indutny/116245283352156779

- Opens pull request with 19k added lines of code written with Claude Code.
- Claims he reviewed them all.

Even if that were true and even if he hadn't used any AI, I would shout that guy out of the room.

Pray that this PR doesn't get merged.

Edit: A PR this large is a nightmare, even when it doesn't contain LLM-generated slop. You want many small PRs that are 1) easy to understand and review, 2) easy to revert if they turn out to contain a bug, and 3) friendly to git bisect, the tool that finds the commit that introduced a bug. Bisect works best with many atomic commits and becomes useless with one commit this big, since it can't narrow down the origin of the bug any further than "the big commit". The people who approved this atrocity are bad software engineers. Even calling them "engineers" is an insult to actual engineers.
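To make the bisect point concrete, here is a minimal sketch in a throwaway repo (the file names and the deliberate "BUG" marker are made up for illustration). With five small commits, `git bisect run` pinpoints the exact commit that introduced the bug; with one giant commit, the answer is just "the big commit":

```shell
set -e

# Throwaway repo with five small commits; commit 3 introduces a "bug".
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email dev@example.com
git config user.name dev

for i in 1 2 3 4 5; do
  echo "change $i" >> app.txt
  if [ "$i" -eq 3 ]; then echo "BUG" >> app.txt; fi
  git add app.txt
  git commit -qm "commit $i"
done

# Bisect between the root commit (good) and HEAD (bad). The test command
# exits non-zero exactly when the bug is present in the checked-out tree.
root=$(git rev-list --max-parents=0 HEAD)
git bisect start HEAD "$root"
git bisect run sh -c '! grep -q BUG app.txt'

bad=$(git rev-parse refs/bisect/bad)   # the first bad commit found
git bisect reset
git show -s --format=%s "$bad"         # -> commit 3
```

With atomic commits, bisect isolates the culprit in O(log n) test runs; inside a single 19k-line commit there is nothing left for it to split.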

#JavaScript #NodeJS #LLM

@davidculley Same. I read through the PR comments. It has multiple approvals and they have also decided that they are fine with AI coded contributions. 🤷🏻‍♂️ Basically, I expect it'll get merged.

@davidculley this does not really change your main point, but the PR has 120 commits in it. OTOH out of curiosity I went looking for sketchy code and found some within ~5 minutes, plus the tests that I looked at seemed pretty superficial, which I imagine is par for the course with vommits.

Between this and all the ecosystem attacks I'm kinda glad I never really got into node.js that much.

@tiotasram And when the PR gets merged, you either get a single merge commit or those 120 commits are squashed into one big commit. In both cases you end up with the problem I described.
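A throwaway-repo sketch of the squash case (hypothetical branch and file names): three feature commits collapse into a single commit on main, so the per-commit history never reaches the target branch:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email dev@example.com
git config user.name dev

echo base > file.txt; git add file.txt; git commit -qm "base"

# Three small commits on a feature branch.
git checkout -q -b feature
for i in 1 2 3; do
  echo "change $i" >> file.txt
  git commit -qam "feature commit $i"
done

# A squash merge stages all the changes but records none of the commits.
git checkout -q main
git merge -q --squash feature
git commit -qm "feature, squashed"

git rev-list --count main   # -> 2 (base + one squashed commit)
```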

Unless you go for a linear history by rebasing, which most projects don't do.

And git bisect works best when the commits are made intentionally, leaving the code in a working state at every commit. I don't know if that is the case with these perhaps randomly created commits.
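For contrast, the rebase-based linear-history workflow looks roughly like this (throwaway repo, hypothetical branch names): the feature commits are replayed on top of the updated main and fast-forwarded in, so every commit survives, in a straight line, with no merge commit:

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email dev@example.com
git config user.name dev

echo base > base.txt; git add base.txt; git commit -qm "base"

git checkout -q -b feature
echo feature > feature.txt; git add feature.txt; git commit -qm "add feature"

# Meanwhile, main moves on.
git checkout -q main
echo more > more.txt; git add more.txt; git commit -qm "main moves on"

# Replay the feature branch on top of the updated main, then fast-forward.
git checkout -q feature
git rebase -q main
git checkout -q main
git merge --ff-only feature

git log --oneline   # three commits, linear, no merge commit
```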

@davidculley huh that's really silly git/github behavior. Indeed makes multiple smaller pull requests much better.

@tiotasram It's much easier to spot logic errors in ten or a hundred lines than in 19,000 lines.

@tiotasram Read this article to understand why rebasing is the best of the three strategies to integrate a pull request.

https://www.sandofsky.com/git-workflow/

There are only two reasons why developers dislike or misunderstand this merge strategy (and one reason that prevents them from choosing it):

- They are too lazy or incapable of creating a clean commit history. Due to their unwillingness/inability to create good atomic commits, they would litter the default branch with "WIP" commits if they followed this merge strategy.
- They like to see visually when a branch branched off of the default branch and merged back into it. This is an anti-pattern and would better be done with git tags.
- The single valid reason to not choose this strategy, in my opinion, is because your command to auto-delete merged branches no longer works. Rebasing changes the hashes of commits and thus git is no longer able to detect that a branch has been merged. This means you either need to delete your branches manually or end up over time with a long list of obsolete branches.
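To illustrate that last point in a throwaway repo (the cherry-pick with a pinned committer date stands in for GitHub's "Rebase and merge", which likewise rewrites commit hashes):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email dev@example.com
git config user.name dev

echo base > base.txt; git add base.txt; git commit -qm "base"

git checkout -q -b feature
echo work > feature.txt; git add feature.txt; git commit -qm "feature work"
git checkout -q main

# Replay the feature commit onto main under different committer data,
# which produces a NEW hash -- just like a rebase merge does.
GIT_COMMITTER_DATE="2001-01-01T00:00:00" git cherry-pick feature

# The old tip of "feature" is no longer an ancestor of main, so git
# does not detect the branch as merged, although its changes are in:
git branch --merged main        # lists only main
git branch -d feature || true   # refuses: branch "not fully merged"
git branch -D feature           # manual force-delete is required
```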

#git

@davidculley fascinating, but kinda depressing. The article claims to explain how to use git in the intended/simple way, but involves a bunch of extra steps and flags.

A much simpler way of getting the best of both worlds would have been for git to force every checkin to include a WIP/working flag (perhaps defaulting to WIP) and to maybe allow git blame to blame merges more easily?

A more-granular commit history should *always* be better than a "cleaned-up" history since the cleanup throws away information that might be useful; IMO it's a failure of the tool design to force/prefer the cleaned-up history for certain use cases.

To be fair, I work in an indie/academic context, usually alone, where I've never had to use tools like blame or bisect.

@davidculley @indutny Hmm, I’ve spent the day doing the opposite with Claude Code – having it split apart a gigantic PR that arose from “commit early, commit often” with a lack of focus – and it’s performing quite well doing that

It’s not necessarily the tool that’s the issue, it’s more how one uses it I think

@voxpelli @indutny A kitchen knife can be used both to provide food and to murder someone.

An atomic bomb, by contrast, has no defensible use cases. Unlike a kitchen knife, it's not neutral.

LLMs are like atomic bombs regarding their neutrality. They are too large to be trained by individuals, they can only be trained in corporate-owned data centers. They are a non-neutral tool designed to concentrate even more power in the hands of oligarchs.

The tool is the issue in the case of LLMs.

@davidculley The volume is the tell. 19k lines in a single PR means nobody is reviewing anything, regardless of how it was written. The PRs that cause the worst production incidents are always the large ones waved through because the author was senior or the deadline was real. AI makes the volume problem exponential. The answer isn't "review harder," it's shipping smaller so humans can actually understand what they're approving.