You know what’s funny? I use AI to develop software. But when I’m evaluating libraries, if I see a CLAUDE.md file I have to go check when it was added.

It’s like prewar steel.

It’s the difference between reading questions on Stack Overflow and implementing the solutions yourself vs. blindly pasting every SO answer until something works.

I do use autocomplete and ask plenty of questions, and sometimes I even use an agent to make small changes that I then review and test. But I would never commit unchecked changes, and a CLAUDE.md implies that the AI is coding AND committing without supervision.

I can’t stress enough how different those scenarios are.

It’s not hypocritical. Because you use AI to code, you know how easy it is to just let the AI do its thing and not check its work. It’s almost like a siren’s song. So you know the odds are that a library coded with AI probably wasn’t checked by a human. That’s just called experience.
kittygram/CLAUDE.md at main — kittygram, a nitter-like frontend for Instagram (Codeberg.org)
I love this sabotage. Has this worked against anyone that you know of yet?
Interesting. I’m unfamiliar with the purpose of that file as originally intended, but if I understand it right, it could be used to detect PRs with AI-made code: tell Claude in CLAUDE.md to write a particular string into every PR, and then reject all PRs containing that string.
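A minimal sketch of how that rejection check could look, assuming the CLAUDE.md instructs the agent to embed a specific marker string in PR descriptions. The marker value and function name here are made up for illustration:

```python
# Hypothetical canary check: reject PRs whose description contains the
# marker string that CLAUDE.md told the AI agent to include.
CANARY = "generated-by-claude-canary"  # assumed marker from CLAUDE.md

def should_reject(pr_body: str) -> bool:
    """Return True if the PR description contains the canary marker."""
    return CANARY in pr_body

# Example usage in a CI step or bot:
print(should_reject("Fixes #12\n\ngenerated-by-claude-canary"))  # True
print(should_reject("Fixes #12"))                                # False
```

A human contributor would presumably notice and strip the marker before submitting, so this only catches fully unsupervised agent-driven PRs.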