Due to security, ethical, technical, and legal concerns, I'm going to spell out the AI policy for my OSS projects, which covers everything under my username, my company (Osaka Red LLC), and the grig project. See my Codeberg.
In short, any use of LLM agentic coding is banned on all of my projects, with a few exceptions listed at the end of this post. Of the no-AI projects, a couple already contain some LLM-generated code, but it is minimal and also listed at the end of this post.
I don’t like genAI and I hate the tech industry’s cloud-based business models. My specific views aren’t necessarily the same as many of the anti-AI views found on the fedi, but I won’t get into that here. On the other hand, I wanted to take advantage of what may be a very temporary situation, where LLM agentic coding services are relatively affordable (the pre-enshittification era), to boost OSS projects, particularly some that I have to get to a certain point soon.
I now think I was wrong to focus on the latter point and am pulling back. The legal integrity of OSS is important, so we need to be careful about this even if it puts us at a disadvantage for the time being (conversely, it may turn out to be an advantage). The issue is that the US Supreme Court (and courts all over the world!) haven’t settled how copyright applies to genAI output. It could be a disaster if OSS uses it extensively and then a court anywhere in the world finds that this constitutes unauthorized relicensing.
So the basic AI policy on my projects is as follows:
Always respect the per-project AI policy. This means that all projects ban all use of genAI, except the few that allow it, listed at the end. At the time of this writing this hasn’t been reflected in the contribution guidelines in the repos, but I’ll work on that.
Always respect the per-issue AI policy. Even if a project generally allows AI, it might disallow it for certain issues for technical or pedagogical reasons.
If AI is allowed, then we mostly go by fedify’s AI policies, with the following modifications:

- genAI may not be used for any audio or visual media, only for text.
- Be extra careful that all comments are factually correct and not misleading.
- LLMs are strongly discouraged for human-readable text, especially in discussions. If you’re not confident in English, prefer posting in your own language instead of using machine translation.
- AI may be used to translate user-facing text (in UIs) and documentation, but only between languages you fully understand and can verify the accuracy of.
- No use of OpenAI or xAI products. Prefer local LLMs if possible.
Bots will be blocked, as will anyone who opens issues about AI policy (you may discuss that under this post instead). Please don’t let the AI industry suck all the oxygen out of the room. There’s so much more to development than tools.
Exceptions, i.e. projects that do allow the use of AI (I have backup plans if I have to start over and write it all by hand):
- Sabratha and Oea, parts of the Tripoli project.
- Gallae - this one doesn’t do much yet…
- nchant - the repo doesn’t exist yet; it’s a newer version of my audio DSL (fka hxal) that will work with grig.audio. It will have a more detailed per-project policy.
Projects that fall under the AI ban but already contain some LLM-generated code:
- grig.controller - an LLM was used to write some tests and dummy classes used by the tests.
- haxe-wordpress - an LLM was lightly used.
🦀

