What's Debian's AI policy? I might try to install it with OpenRC if I can figure it out.
I tried searching for it so I could read it myself, but every search engine shows a million conflicting stories made in the past week, & every single one of them is randomly generated. There are so many that uBlacklist isn't helping. I also can't find it on debian.org.

@jackemled Debian doesn't have a firm stance right now. The consensus so far seems to be case-by-case, but any contribution that used AI needs to say so (which is difficult because there aren't reliable ways to tell, and basing arguments on copyright is tricky because jurisdictions differ and what's acceptable in one region can differ greatly in another). There's a draft general resolution, but it's stuck in draft state because there isn't consensus on what "AI" means to them, or on how they could restrict certain forms in the first place.

https://blog.desdelinux.net/en/Debian-debates-the-future-of-AI-models-in-its-ecosystem/

https://www.phoronix.com/news/Debian-DPL-Update-March-2026

@jackemled Which is, unfortunately, the correct thing for them to do as a massive org operating largely on volunteer labor, with a lot of people relying on their product. There's no point in having a "no AI" (or no-LLM or whatever) policy if you can't strictly define what that looks like.

Is something like IntelliSense classed as "AI" for this purpose? It's easy to say that entire massive generated sections are a thing to bar, but how can you tell how much was generated raw and how much was used as a starting point before being modified from there? What if the entire thing was generated, but the person submitting the PR or whatever demonstrates that they clearly understand the output and have thoroughly reviewed it themself? Yeah, it's easy to just reject PRs from agents so long as they're clearly labeled as such, but anything beyond that becomes challenging.

Raising copyright just becomes a massive minefield due to licensing and IP law. You could make the case that a model might accidentally mimic code from a project with an incompatible license, which could raise legal issues, but how could anyone tell if it's actually copying that differently-licensed code or if it just happened to look like that? (And that's ignoring folks who just do things similarly to differently-licensed projects anyway, because why not borrow a general idea or concept that does what you want?)

I don't think there's really a strong way to enforce any "no AI/LLM/generated code" policy in general, especially with a project as massive as Debian. It's all very reliant on trusting contributors.

@senil I would just say "no LLMs", because that's very simple & completely captures everything being marketed as "AI" right now, but it's hard to enforce. I have seen some good AI policies that handle this the best they can despite that, though. Fedora tries to make sure LLM use is declared, so those contributions can be easily ignored when a normal contribution for the same thing exists, or more thoroughly investigated when there are no other contributions for it. I can't remember exactly, but I think Linux's AI policy is "if your code is shit we are banning you, you must demonstrate a complete understanding of your contribution or else we are microwaving your hard drive!". Mastodon's AI policy seemed good to me too, but I'm not sure how it scales.