I find it very interesting that llama.cpp, a project presumably maintained and contributed to primarily by people with a decent understanding of LLMs, has a strict policy against predominantly LLM-generated contributions. The maintainers ban accounts that violate it, use an AGENTS.md that tries to rope the agent itself into enforcing the policy and warning the user, and have even discussed adding canaries to catch contributors who lie about AI use or don't sufficiently review its output.