Apparently chardet got Claude to rewrite the entire codebase from LGPL to MIT?
https://github.com/chardet/chardet/releases/tag/7.0.0
That is one way to launder GPL code I guess?
@scy
A US court is leaning towards the view that LLM-generated code is fundamentally not copyrightable.
This is a different problem to the moral issues I have with this.
@Foxboron Yeah but that's what I mean: Just because the end result is not copyrightable, does that automatically mean that it can't be a copyright violation?
Like, changing the format or medium of something is not a copyrightable work.
So, by that logic, if I take a copyrighted MP3 and convert it to AAC and publish that, my AAC is not copyrightable, but it's not a copyright violation to take it and publish it?
That's what I mean.
The Supreme Court has already declined to take up such cases.
So we are getting a precedent in US law. It's yet to be settled in any high court in the EU, though.
Sure, but we are not really looking at, nor discussing, cases where an LLM spits out something verbatim from another project here.
@Foxboron @joshbressers @scy Open-source projects that have sought to be compatible with proprietary software, e.g. Samba trying to be compatible with Windows SMB, etc., have (if I'm not misremembering) taken a "clean room" approach and outright stated they do not want any code from any developer who had even looked at the MSFT code for fear of being accused of infringement.
The copyrightability of LLM output is not relevant here - the only question is whether a court would consider the original license infringed upon in the creation of the output.
As I understand it, though, this is a reimplementation of a codebase by the same contributors -- Dan Blanchard seems to be the primary maintainer before and after the rewrite, so ISTM he'd be able to relicense the project regardless of whether it was passed through an LLM first.
It will be interesting when this happens because a company or person decides "I don't like copyleft, so I'll just run this through the LLM wash until I get a functional copy". But this doesn't seem to be that.
@scy @Foxboron @joshbressers I mean, they _can_ if they rewrite the code in question.
So here - *if* one of the LGPL code contributors is offended by the license change they could look at the new codebase and see if the new code resembles their contribution. Then they'd have to challenge it.
But projects have been relicensed without seeking permission from every contributor and/or by removing contributions if they cannot get approval. I'm not aware of any cases where a contributor has successfully challenged such - but there's always a first time.
Depends.
If you have a permissively licensed project, you can change the source to GPL by just using a poison pill approach.
This is what Forgejo did as an example.
https://forgejo.org/2024-08-gpl/
This works as the MIT license terms are met.
The other way around, relicensing GPL code under a permissive license, would not work.
@Foxboron @jzb @joshbressers You're right, I should've worded that differently.
They can change the license, if the current license allows it.
Still, everyone keeps their individual copyright.
@scy @Foxboron It is absolutely a violation for the company which built the model to build a model which emits license-restricted code without following the terms of the license. The model doesn’t commit the violation any more than a photocopier does, of course.
The emitted code cannot be copyrighted at all, but if it emitted the code in a way which meets the terms of the license, the code would be covered by the original license.
@scy @Foxboron It's a bit complicated, actually. IANAL, but this is what I understand:
- The music notation is copyrightable, individual notes are not. A sequence of notes is debatable, and it depends highly on recognizability AFAIK.
- A music recording is copyrightable. Playing that music in a distinctly different arrangement, less of an issue.
- Arguably, a change in digital format is either still the same recording, or sufficiently indistinguishable from it.
- Copyright has an ancient...
@scy @Foxboron ... naming and goes back to a time where making copies and distributing them was the hard part.
This is a non-problem in the digital age, which is why it's fine to create backup copies of copyrighted works, so long as the people accessing them are always the people having purchased/licensed an original copy.
So LLMs training on GPL is not itself a copyright violation, and them reproducing similar code isn't either, but then publishing such sufficiently similar code is.
@thomasjwebb Right now, that is how SCOTUS is leaning regarding AI-generated output. They refused to interfere with a patent application and an "artist" copyright claim, leaving it up to the patent and copyright offices to decide, and both said no. Some guy used AI to create a beverage holder and light beacon. When the patent was denied, he tried to copyright the AI-created "artist" renditions to get around the patent denial.
https://www.supremecourt.gov/docket/docketfiles/html/public/25-449.html
YUP
copyright is for humans, not automata, hard or soft.
so, ironically, the prompts are copyrightable but not the output.
so anything you want to copyright should not be prompted into a corporate regurgitation machine, including so-called grammar checkers.
@thomasjwebb @Foxboron @scy In the US, at least, human authorship is required for copyright, and if you try to copyright something that's a mix of AI and human generated then generally only the human generated part is copyrightable.
This is separate from the LLMs emitting text other people have written, so at *best* this code can't be licensed because it's not copyrightable, and at worst it's license laundering, and there's precedent (IIRC) for stomping on that hard.
@Foxboron @scy This means that anything "new" (i.e. nothing) the "AI" brought to the work is not a creative work that you can hold copyright to just because you were the person prompting/using the "AI".
It does NOT mean that the copyright on whatever the AI plagiarized is void. But that's how the industry will try to spin these rulings. We need to point out this distinction and fight their attempts to mislead, which aim to seize and enclose our work.
@Foxboron @[email protected] That'd be the US system. Then there's the various Euro systems that differ substantially. I'm certainly curious how this will turn out.
On the other hand: it'd require that those who can enforce their rights here actually do so.
IP rights are normally enforced pretty harshly, even against consumers (anyone remember the days of torrent C&D letters, or the traditional find-and-ban of infringing exhibitors at Computex et al.?), yet they're effectively ignored when it comes to FOSS.
There is virtually no education for biz, cs or law students on this topic, let alone mandatory ed.
When the possibilities and rights are presented to those who actually hold them, they are often dismissed, especially by developers on the younger side or those still in a "hobby" / "non-commercial" stage, only for them to complain shortly afterwards about sustainability and demand funding.
Instead we see demands to throw substantial amounts of tax money at random FOSS projects based on more or less random criteria and evaluators. Which will totally scale, right?
Virtually every company that faced FOSS compliance enforcement ended up consciously allocating resources to FOSS in various ways. There are a lot of companies, and they are a renewable resource in a functional economy.
But what do I know, rite? I just see the cases.
/rant