When robots are part of a process, their involvement MUST be identified.
Always identify LLM output ('got this from AI: ') and disclose when images were AI-generated.
(updated this to remove the bit about code as it is less clear cut)
@hdv @jaffathecake @jcsteh @pikesley
FWIW as someone who reviews a lot of Firefox code (and is a bit unimpressed by AI coding), FF developers tend to disclose when they're using AI if they're not quite sure about what they're submitting.
That said, there are tons of bad human-written patches too, so the standards are pretty similar: if I don't understand the code you're sending me, I'll request changes (be it "explain why this is the right approach" or "document stuff better" or...).
@hdv @jaffathecake @jcsteh @pikesley
The only real risk IMO is being overwhelmed with tons of crappy AI patches, but so far people have been reasonable with their usage.
The more common pattern I see is a coworker uploading an AI-written patch as work in progress and saying "Before wasting your time, I tried to fix X with Claude and it came up with this approach, not sure it's on the right track, can you take a look?", and that's honestly... fine? I would've asked directly but... :)