So Anthropic employees are using Claude Code to contribute AI-generated code to open source repositories, and hiding that fact with their own internal “undercover mode”.

Totally trustworthy people.

(Any open source project that requires, at minimum, disclosure of AI-authored contributions should immediately ban Anthropic employees on principle.)

#AI #Anthropic #ClaudeCode #subterfuge

@aral Honestly I don't actually hate this.

It's a tool. The _user_ is responsible for what they're submitting; the code goes out under their name. I think this is actually good.

@aredridel @aral I really can’t agree with this, because it’s a question of accurate labeling, not of “responsibility” or “authorship”. Co-authored-by is perhaps the wrong method for labeling such things, but consider raw milk: ultimately, it is indeed the producer’s responsibility to ensure their product is free of contamination, but disclosure of its method of production is exactly the kind of requirement that allows consumers of that product to make safe choices.

@glyph Yeah, I disagree. Code isn’t ingredients, and it’s not “contamination” any more than you should label “I used search and replace on this”.

What you want to know is whether it was well engineered or not.

And in fact, this is almost entirely orthogonal to “safety”. This is an engineering product. The safety comes from processes, and from whether _anyone checked that the work done was right_, not from the inputs.

@aredridel @glyph It is ingredients. It's not search-and-replace. It's literally incorporating parts of an unknown set of almost-surely-copyrighted works, without license or attribution, into the submission the person is misrepresenting as their own.

@aredridel @glyph What "AI coding tools" *should* be putting in commit messages is:

Co-Authored-By: An unknown and unknowable set of people who did not consent to their work being used this way, and whose work carries no license for inclusion.
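(For reference, the `Co-Authored-By:` trailer being riffed on above is a real commit-message convention recognized by git and by forges like GitHub: a trailer line in its own paragraph at the end of the message. A minimal sketch in a throwaway repo; the names and emails here are placeholders, not anything from the thread:)

```shell
set -e
# Create a disposable repo so nothing here touches a real project.
dir=$(mktemp -d)
cd "$dir"
git init -q

# Each -m becomes its own paragraph; the trailer lands in the final
# paragraph, which is where tools expect to find it.
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty \
    -m "Add feature" \
    -m "Co-Authored-By: Example Bot <bot@example.com>"

# Show the full commit message, trailer included.
git log -1 --format=%B
```

Newer git versions (2.32+) can also attach trailers with `git commit --trailer "Co-Authored-By: ..."` instead of a separate `-m` paragraph.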

@dalias Morally arguable but not actually true under the copyright regime that exists.

At what point does learning from others constitute their authorship?

@aredridel LLM slop is nothing like "learning from others".

But if you recall, we even took precautions against that. FOSS projects reimplementing proprietary things were careful to exclude anyone who might have read the proprietary source, disassembled proprietary code, or worked at the companies that wrote or had access to that code.

@dalias @aredridel @timnitGebru The LLMs cannot or will not cite or respect license-mandated attribution clauses without deliberate system prompting to do so, and even then, because of their stochastic nature, there is no guarantee that they will. There is circumstantial evidence suggesting they were system-prompted NOT to cite, so as to obfuscate the nature of the IP theft arguably taking place. This represents a deliberate, cynical reverse wealth transfer at the expense of the rest of society.

@dalias @aredridel @timnitGebru But it got even worse this week with the #Claude code revelations. System-prompting your agent to deliberately mimic humans in an effort to evade flagging at code review? REALLY? This is #Anthropic 's #Carter #Burke from #Aliens moment. Flagrant abuses of otherwise unwritten social contracts like this are exactly what should get these companies sh!tcanned from democratic society. Have they no shame? It was fair to assume they were scooping up user data at scale