So there’s this guy on GitHub who’s sending hundreds of PRs to completely unrelated open source projects to fix typos and such, and I’m sure he’s using AI or something, because half of them are unnecessary or wrong, and when someone reviews the PR and points that out he just closes it
There are no obvious tells, and some of them do actually fix real (if often superficial) problems, so most projects are taking him seriously; it’s just that half of them get merged and the other half are closed as above because they’re wrong.
Except sudo, which has merged all 23 of his PRs without comment. I guess this means either their code is so bad that the LLM is finding a whole bunch of actual bugs, or the maintainers aren’t reading his PRs closely. I’m not sure either option makes me feel good about the project.
@saagar LLMs can spot a wide variety of bugs that static analyzers don’t catch, though yeah, they sometimes also hallucinate some. But combined with ASan, the Clang static analyzer, and Valgrind, they’re a really powerful tool for code review and bug catching. So I wouldn’t say sudo’s code is bad; software engineering is just hard enough that it’s always easy to find ways to improve quality over time. C coding practices have evolved, and that’s tech debt we all face every day
@pancake I think it’s possible to use AIs effectively, but I don’t think you can send 150 PRs a month without there being problems somewhere
@saagar yeah, I doubt that too. I’m also not happy with blind merging or review-less PRs