When robots are part of a process, their involvement MUST be identified.

Always identify LLM output (for example, prefix it with 'got this from AI: ') and disclose when images were AI-generated.
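
As an illustration only (the exact wording is up to the author; these snippets are hypothetical, not a prescribed format), such disclosures might look like:

```
Got this from AI: the timeout happens because the retry loop
never resets its counter.

![Sequence diagram of the checkout flow](checkout-flow.png)
*This image was generated with an AI tool.*
```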

(updated this to remove the bit about code, as that is less clear-cut)

@hdv I'm conflicted on this one. On one hand, I think transparency is important. On the other hand, if someone chooses to use AI to assist them, I don't think the standards of accountability or code review should be any different. The person submitting it is ultimately still fully accountable and responsible for the patch, and the reviewer should apply the same rigor as they always do. I could see a world where attributing AI has the opposite effect: a (very) misguided reviewer assumes it is more likely to be correct *because* AI is involved. @pikesley
@jcsteh @[email protected] hmm… interesting I had not considered the opposite effect, you may be right. Maybe a description in commit message is more helpful in that case, where you could say 'please note I did some of this with'? But I guess that's still deviating responsibility away from the human that ought to be responsible anyway