So Anthropic employees are using Claude Code to contribute AI-generated code to open source repositories and hiding that fact using their own internal “undercover mode”.

Totally trustworthy people.

(Any open source project that requires, at the very least, disclosure of AI-authored contributions should immediately ban Anthropic employees on principle.)

#AI #Anthropic #ClaudeCode #subterfuge

@aral Honestly I don't actually hate this.

It's a tool. The _user_ is responsible for what they're submitting. It puts code they generated in their own name. I think this is actually good.

@aredridel @aral I really can’t agree with this, because it’s a question of accurate labeling, not of “responsibility” or “authorship”. Co-authored-by is perhaps the wrong method for labeling such things, but consider raw milk: ultimately, it is indeed the producer’s responsibility to ensure their product is free of contamination, but disclosure of its method of production is explicitly the kind of requirement that allows consumers of that product to make safe choices.

@glyph Yeah, I disagree. Code isn't ingredients, and it's not “contamination” any more than you should have to label “I used search and replace on this”.

What you want to know is whether it was well engineered or not.

And in fact, this is almost entirely orthogonal to “safety”. This is an engineering product. The safety comes from processes and from whether or not _anyone checked that the work was right_, not from the inputs.

@aredridel @glyph I find myself wedged between these two positions.

To me, it is good that authorship (and the responsibility that comes with it) is staying with a human, but it is bad that Anthropic are going out of their way to prevent disclosure that code is not human in origin (because that disclosure is a useful measure of "well-engineered", among other things)

@SnoopJ Yeah, not sure it's a good measure of that at all.

And also people are only caring _because_ it's LLM-generated, not because it's unsafe.

(An awful lot of LLM generated code is dreck, but an awful lot of code is dreck in general.)

But yeah, the special case of hiding that authorship is just ... ew.

@aredridel it would have been better if I'd said it's one thing I'm paying attention to; I concede that it is, by itself, not a good measure

But if someone used a language model, I already know that they couldn't be bothered for some of it, and that is useful signal to me.

@aredridel to say that another way: the class of mistakes I am looking for is different if I know that a language model was involved, because I know it will make mistakes that a human being never would

@aredridel I guess it's kind of a moot point though, sustained contribution by someone who is relying on a language model will definitely surface that the tool is present

And once I know a person has used the tool *once*, I assume from that point forward that they are using it for *everything*, even if this isn't the case. If it's disclosed up-front, it damages my trust in the person a LOT less. But I guess people are going to vary on how corrosive to trust they find this.