If you want to ensure your project remains copyrightable, non-infringing, secure, and maintainable, you should adopt a policy like

https://sciactive.com/human-contribution-policy/

#AI models are trained on publicly available code that may be subject to restrictive licensing conditions. These models may produce verbatim, near-verbatim, or derivative snippets of this code, which would therefore be subject to the license terms under which that code was released. If such snippets are introduced into a code base with an incompatible license, that code base's licensing could become legally unenforceable, or the maintainers of the code base could even be subject to lawsuits.
As of March 2026, the US #Copyright Office and several international bodies maintain that #AI generated material, without significant human “creative control”, cannot be copyrighted. This means that AI generated material often resides in the #publicdomain, making it impossible to enforce any license terms on the material.
New regulations may impose strict controls or new #liability for #AI generated material. This may include, but may not be limited to, legal liability for #security vulnerabilities or "#hallucinated" sections of AI generated code.
#AI generated #code may be more likely than #humanauthored code to introduce #security vulnerabilities. These vulnerabilities can be very difficult to recognize during the code review process.
#AI #code often includes references to non-existent dependencies. These references are commonly called “#hallucinations”. A new type of #attack has arisen that involves an attacker registering a package whose name is frequently hallucinated. When AI code containing this #hallucination is accepted, and this dependency is installed, the attacker can ship #malicious code into the project’s build, introducing a major #security vulnerability. This type of attack has become known as “#slopsquatting”.
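One pragmatic defense against slopsquatting is to verify that every declared dependency actually exists on the official package index before it is installed. Below is a minimal sketch in Python, assuming a pip-style requirements.txt and the public PyPI JSON API; the helper names are illustrative, not part of any standard tool.

```python
# Sketch: guard against "slopsquatting" by checking that each declared
# dependency is actually registered on the package index before install.
import urllib.request
import urllib.error


def parse_requirements(text):
    """Extract bare package names from requirements.txt-style lines."""
    names = []
    for line in text.splitlines():
        # Drop comments and blank lines.
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        # Keep only the name part before any version specifier or extras.
        for sep in ("==", ">=", "<=", "~=", ">", "<", "["):
            line = line.split(sep, 1)[0]
        names.append(line.strip())
    return names


def exists_on_index(name, index_url="https://pypi.org/pypi"):
    """Return True if the package name is registered on the index.

    PyPI's JSON endpoint returns HTTP 404 for unknown names, so a
    hallucinated dependency is caught before anything is installed.
    """
    try:
        with urllib.request.urlopen(f"{index_url}/{name}/json", timeout=10):
            return True
    except urllib.error.HTTPError:
        return False
```

A check like this only proves the name is registered, not that the package is trustworthy; an attacker may have already squatted the hallucinated name, so any unfamiliar dependency still warrants manual review.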
#AI material frequently mirrors industry-standard practices and lacks awareness of the project's architecture and conventions. The result is #code that can duplicate effort, neglect the project's own practices, and deviate from its planned architecture, as well as documentation inconsistent with existing standards. As more of a project becomes AI generated, the original architecture can become obscured, leaving a code base that is no longer coherent.
The users of a project place implicit trust in the maintainers to understand the #code contained within its code base. #AI generated code, which may not even be fully understood by the contributor, breaks that trust: it is the result of an automated process, not intentional #authorship. Allowing such code into the code base means the maintainers may no longer fully understand what they maintain.
#AI generated material can be created much faster than it can be reviewed. This imbalance places a heavy burden on maintainers, who must review an abundance of AI generated material that often has the appearance of high quality but contains many flaws. The difficulty of properly reviewing these contributions, combined with their sheer volume, can lead maintainers to burn out and leave the project.

This policy is distributed under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license, available at https://creativecommons.org/licenses/by-sa/4.0/. Anyone can share and reuse this policy, in its original form or modified, under the conditions of the CC BY-SA 4.0 license. The full legal text of the license is available at https://creativecommons.org/licenses/by-sa/4.0/legalcode.en. If you modify this policy, rename the policy in order to avoid the implication that the modified policy is endorsed by SciActive Inc.

