RE: https://hachyderm.io/@nedbat/116133445557306539

I got Ned's point, but I don’t think we can treat Claude (or similar tools) at the same level as a person.

We've never added tools (e.g. isort, Black, Ruff, ...) as co-authors of commits, even when they generated 100% of a commit.

Listing Claude as a co-author of a commit puts it at the same level as a person, but it's a tool.

The author of a commit is a person responsible for the code they submit, without shifting that responsibility to the tool, or worse, to the project maintainers.

#FOSS #AI

After thinking a bit more about Ned’s post and the discussion here, it really felt like the right moment to make expectations around AI-assisted contributions clearer in Django.

So I opened a proposal to add an AI/LLM contribution policy.

The idea isn’t to police tools, but to keep responsibility clearly human and reduce ambiguity for contributors and maintainers.

If you’re interested, have a look and share your thoughts:
https://forum.djangoproject.com/t/proposal-add-an-ai-llm-contribution-policy-to-django/44298

Proposal: Add an AI/LLM Contribution Policy to Django

If you use AI-generated content, you currently cannot claim copyright on it in the US. When coding, if you fail to disclose/disclaim exactly which parts were not written by a human, you forfeit your copyright claim on the entire codebase. This means copyright notices and even licenses that folks put on their vibe-coded GitHub repos are unenforceable. The AI-generated code, and possibly the whole project, becomes public domain. Source: https://www.congress.gov/crs_external_products/LSB/PDF/LSB1...

Django Forum
@paulox I am not a lawyer but I think you should probably have one review this before making any decisions. As far as I am aware, "who unequivocally owns the copyright" is simply not how this works in the US at this time. LoC and the copyright office have been very clear that works generated by LLMs or related tech do not qualify for copyright so no one owns it.

Thanks @coderanger but the wording I quoted comes from Hynek’s `attrs` project, and he isn’t a US citizen, so US-specific copyright rules weren’t necessarily the reference point there.

That said, my intent isn’t to copy that document verbatim. I’m using the `attrs` AI policy as inspiration and a starting point, and any policy for Django would need to be adapted accordingly, especially since Django is a US-based foundation and the legal framing will likely need to be different.

@paulox I am far less knowledgeable about EU law but my understanding is it’s much the same there. The UK does have explicit provisions for non human authored works but the EU has voted to continue requiring human authorship for copyright protection.
@paulox do you know if this something the AI research team has discussed?

Hi @CodenameTim , I just opened the discussion on the forum now, it’s definitely something the AI WG could discuss there, but the forum felt like a good place to start and gather broader community input.

If it turns out we need something more focused, we can always take it back to the AI WG for deeper reflection.

Also… strange to read you this early in the morning 🙂

@paulox yeah, I had a bad night's worth of sleep and woke up a bit earlier than usual.

@CodenameTim @paulox sleep is overrated (~4hrs).

I find the linked argument to be neither valid nor sound. As you point out, a tool is not a person, and pull requests are the responsibility of the person, who must ensure the code does what is expected, not the tool. I suspect that using Black poorly could result in an unwanted change just as much as Claude could.

The author is the person. Where relevant the tool that made the change can be mentioned in the PR body itself.

Thanks @calum for the contribution to the discussion.

If we agree that Claude is just a tool, like Black, isort, or Ruff, then we should also agree that tools shouldn’t list themselves as co-authors of commits.

We never had to make a rule about Black or isort doing that, because they never tried. If newer tools insist on adding themselves in “Co-authored-by”, an explicit rule may simply prevent that bad behavior.
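For projects that do want to enforce such a rule mechanically, one possible approach (my sketch, not anything Django has adopted) is a `commit-msg` hook that rejects `Co-authored-by` trailers naming known AI tools. The `AI_TOOLS` list here is purely illustrative:

```python
#!/usr/bin/env python3
"""commit-msg hook sketch: flag Co-authored-by trailers naming AI tools.

Install as .git/hooks/commit-msg (and make it executable). The AI_TOOLS
tuple is a placeholder; adjust it to the tools your team actually uses.
"""
import re
import sys

AI_TOOLS = ("claude", "copilot", "chatgpt", "gemini")

def ai_coauthor_lines(message: str) -> list[str]:
    """Return the Co-authored-by trailer lines that mention a known AI tool."""
    pattern = re.compile(
        r"^co-authored-by:.*\b(%s)\b" % "|".join(AI_TOOLS),
        re.IGNORECASE,
    )
    return [
        line for line in message.splitlines()
        if pattern.match(line.strip())
    ]

if __name__ == "__main__" and len(sys.argv) > 1:
    # Git passes the path of the commit-message file as the first argument.
    with open(sys.argv[1], encoding="utf-8") as f:
        offending = ai_coauthor_lines(f.read())
    if offending:
        for line in offending:
            print(f"commit-msg: AI tool listed as co-author: {line}",
                  file=sys.stderr)
        sys.exit(1)  # non-zero exit aborts the commit
```

A hook like this only nudges local workflows; a server-side check or CI job would be needed to enforce it on pushed commits.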

#FOSS #AI

@paulox this feels intuitively reasonable and the logic appears, at a cursory reading, sound. I've no doubt other opinions will show themselves, so it will be interesting to see where things go.
@CodenameTim I hope you can recover the sleep you lost! In my case, reading a demanding book before bed usually does the trick. I fall asleep very well after that.
@CodenameTim @paulox Not yet. They are still trying to get started. It was top of mind when we talked about the board being stretched too thin to effectively drive each one.
@paulox I’ll try to find time to write a more nuanced response, but one thing I think cannot be ignored is this https://zomglol.wtf/@jamie/116059523957674208
Jamie Gaskins (@[email protected])

Attached: 2 images If you use AI-generated code, you currently cannot claim copyright on it in the US. If you fail to disclose/disclaim exactly which parts were not written by a human, you forfeit your copyright claim on *the entire codebase*. This means copyright notices and even licenses folks are putting on their vibe-coded GitHub repos are unenforceable. The AI-generated code, and possibly the whole project, becomes public domain. Source: https://www.congress.gov/crs_external_products/LSB/PDF/LSB10922/LSB10922.8.pdf


@fallenhitokiri @paulox Folks, just a reminder not to take legal advice from people on the internet who are not lawyers. They heavily highlighted and clipped three summaries from three cases that have no bearing on the recommendation.

Check out https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-2-Copyrightability-Report.pdf and you'll see that the claim is largely 💩

They recommend not allowing copyright on 100% generated projects, but our existing laws allow everything else per their guidance/recommendation. Read the PDF.

@webology @paulox thank you! I appreciate the clarification. As someone not too close to US law, I read the PDF slightly differently.

@fallenhitokiri @paulox It's all good. There are three parts to it. They keep publishing parts as they figure it out. Overall, I am impressed by it, but I haven't read part three yet.

At the end of the day, these are just tools that people can use for good and for bad, but mostly somewhere in the middle. And it sucks to see them used for bad and causing grief.

@webology @paulox totally agree.

My personal experience with agentic editing hasn't been very good so far, but neither were IDEs, debuggers, or autocomplete when I used them for the first time.
I consider LLMs the same: a tool that helps me go from A to B, but not a waiver to be lazy with what's committed or pushed (with a wider spectrum from amazing to miserable, and some concerns around their creation and use).

@fallenhitokiri

Speaking of model hype (me, not you), I am excited but a bit time-poor to try out https://ollama.com/library/qwen3.5 and https://ollama.com/library/lfm2, which I'm seeing deliver better-than-I-thought-possible results for local models. Not sure if you are still dabbling locally, but I hope to have time and RAM to try them out this weekend.

@webology I am building my local AI around lfm2.5-thinking for tool routing right now. It’s… okay but I expected more from the marketing.

Qwen 3.5 currently runs in my Studio and is humming along nicely. Very competent and so far didn’t mess up. But I haven’t had much time with it.

So far the strongest contender to build my research agent around and one of the first where online models don’t deliver better results in limited testing.

@fallenhitokiri this feels like how our generation sets up playdates for our LLMs. 😂
@webology I hate how much I just laughed out loud reading this 😂
@paulox @fallenhitokiri le sigh. That felt gross.
@paulox @fallenhitokiri When I meet with our Trademark lawyer/law firm on Monday, I plan to mention this too. While Trademark and Copyright are two different things, I suspect they come up quite a bit in their watercooler conversations.

@paulox I'm not sure I agree with this take. Disclosing contributions can make sense when debugging: I would tend to be more critical of these commits when bisecting failures later.

That doesn't necessarily make sense, since humans make mistakes as well. Still, it's a data point, and simply stripping the attribution doesn't improve the code in the commit at all.

Thanks @matthiask this is a helpful perspective.

I’d keep `Co-authored-by` for people who actually contributed and share responsibility for the code. If it’s useful, context can live in the commit message instead e.g. noting AI assistance (Claude), or tools like Ruff, Black, or PyUpgrade that shaped or generated parts of the diff.

Claude may be more advanced, but it’s still a tool. The developer submitting the commit remains responsible, and that’s what authorship should reflect.

@paulox I absolutely love this take ❤️ It should be a dev's responsibility to make sure the tools they're using output something sensible, else it just looks bad on them!

However, at least the maintainers get to choose the tools and linters their projects use. If someone reformats a codebase with a different linter (or config), it's no surprise if it's rejected. With LLMs, it's not the maintainer's decision, and spotting issues often takes more effort than reviewing a giant diff.

@paulox @nedbat we had this discussion at work recently. We agreed quickly that recording LLMs as the *author* of a commit was never acceptable: a person had to take responsibility for committing to the repository.

Less obvious was whether an LLM could be a co-author (i.e. included in a `Co-authored-by:` trailer). We eventually decided that they couldn't: just like formatters and other tools (like code mod scripts), they're just that: tools.

@paulox @nedbat One of the things we use co-authors for is to know who to talk to for more context about a change if the author is unavailable (or has left us). An LLM can't do that.

We do want to keep track of LLMs being used to produce whole commits, so we've started using an `Assistant-model:` trailer for that.
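As a concrete illustration of that convention (the subject line, model name, and co-author here are all made up), such a commit message could look like:

```
Normalize whitespace handling in template parser

Rework the tokenizer so consecutive blank lines are treated consistently.

Assistant-model: claude-sonnet (hypothetical example value)
Co-authored-by: Jane Doe <[email protected]>
```

Git treats the final block of `Key: value` lines as trailers, so tooling like `git interpret-trailers` can extract `Assistant-model:` entries later without any special parsing.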

@paulox

> We've never added tools (e.g. isort, Black, Ruff, ...) as co-authors of commits, even when they generated 100% of a commit.

Yes, we have:

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>