I have a problem with the current push to attach "AI disclosure declarations" to everything. For code and pull requests, it makes sense to disclose which parts were written by AI, provided the maintainers allow AI contributions at all.
On the other hand, universities now have policies requiring students to "disclose" when their essays are written by AI, and scientific bodies like the ACM require disclosure when papers are written in part by AI.
The thing is, in these latter cases, two things are true:
1) It is actually not acceptable, in these domains, for *any part* of the work to be written by AI. Full stop. I DO NOT WANT TO READ SLOP. I DON'T CARE IF IT IS DISCLOSED AS SLOP. WE MUST NOT PUBLISH SLOP, EVEN WITH A DISCLAIMER. So a disclosure of that kind ought to mean a desk rejection. It's like the police requiring you to disclose that you've broken the law: idiotic.
2) Other uses of AI, including idea generation, rubber-ducking, and search, should no more require "disclosure" than using Google or talking to a friend does. I don't "disclose" that I googled something, or that I went to the library and looked at some books. I do cite the sources I use, according to normal scientific standards, but that is different from citing the tools I used in the course of thinking about things.
For these reasons, I find the current thrust toward AI disclosure to be deeply misaligned with both reality and scientific values.
