I think some large open-source advocacy organization should try to create an (optional) ethics code for AI use in open source. Projects could then advertise that they use AI only in some restricted way (or that they don't use AI at all), giving people some transparency about how AI is used in the open-source projects they rely on.
I propose the following transparency guidelines:
1. Label every commit that was at least partly written by AI (see the example below).
2. Disclose the model used and whether it was self-hosted or from a third-party service.
3. Disclose any conflicts of interest.
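As an example for point 1 (just a sketch, not an existing standard): a project could require a trailer in the commit message, similar to the existing Signed-off-by: or Co-authored-by: conventions. The trailer names here (AI-assisted:, AI-model:) are made up for illustration:

    Fix off-by-one error in config parser

    AI-assisted: yes (initial patch drafted by a model, reviewed by a human)
    AI-model: Llama 3 70B, self-hosted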
(post edited because I discovered how to use "return" on Feditext)