so 3 courts + the US Copyright Office say you can neither copyright nor patent anything made primarily with LLMs, because automata aren't human.

#SCOTUS won't review these rules because copyright is meant to protect human creations, not software or automata.

this may mean #AWSlop #Microslop are “de-copyrighting” & “de-patenting” their own proprietary software as they let automata “code” 🧐

❝ AI-generated art can’t be copyrighted after Supreme Court declines to review the rule ❞
https://www.theverge.com/policy/887678/supreme-court-ai-art-copyright

BTW

as Google attempts to turn #Android phones proprietary, what with the way techbros have conspired to use embeddables as backdoors, it should be interesting to do a full audit of the hardware and software used in Android phones specifically manufactured for the USA market.

basically, techbros have hidden behind “trade secrets” and "security" to take control away from us.

i would assume auditing for what’s built with automata should render that proprietary part null.

@blogdiva That's probably valid in the USA, but the world is roughly cut into 5 sections in terms of copyright law, and in Europe it's mostly the Geneva Convention: an idea can't be protected (code included), but anything that is a direct copy of created material (text, image, art in general) violates the law, as long as the copying can be proven. So in the EU, OpenAI and a lot of AI models are illegal to produce and operate; they even provide the proof
@blogdiva of such copyright infringement themselves, since AI is able to spit out some training data directly, without alteration, during the generation/inference process.
@blogdiva Ah and worst of all, they make money with it, which is aggravating.

hence the use of US, as in UNITED STATES 🙄

@DarkRedman

@blogdiva that's silly, it's like saying something written on a typewriter is not copyrightable because it was made by a machine.. The "AI" program was made by a human in the first place, it's just slightly more sophisticated..

@elduvelle @blogdiva When you copyright a book, you’re not copyrighting the output of your typewriter; you’re copyrighting your work.

The AI program can be copyrighted. Its output can’t.

It’s pretty consistent.

@drahardja Hmmm.. not sure.. but this made me think more about it: say the typewriter actually changes the inputted letters a bit, for example turning some of the Ts into Ss. Maybe the author notices it and likes the output, or not, but in any case they want to copyright the resulting book (with the "typos"). That would be valid, right?

Now, isn't the output of an LLM a combination of its inputs (prompt) and its internal machinery (transforming the inputs)? So why can't the output be copyrighted?

Edit: we should probably also consider the training set as part of the inputs, but I still don't see why the output couldn't be copyrighted. However, who would benefit from the copyright is a good question: probably all the authors of the work that went into the training set + the person who wrote the code of the LLM + the person who wrote the prompt..

@elduvelle

EDIT: As @LeslieBurns says below, this is INCORRECT.

I’m not a lawyer. But intuitively, as the SCOTUS implies, copyright protects the work of humans. When writing a prompt to generate art, a machine is performing the vast majority of the transformation from the billions of works it ingested, not the human. Granted, *how much* human work needs to happen for something to be “transformative” (and thus grant the person a copyright) has been a subject of debate for decades, but generative AI is nowhere close to that threshold IMO.

@drahardja
I agree to some extent, and I'm also not a lawyer, but instead of saying that the output of an LLM can't be copyrighted, I think it would mean that the question is who should benefit from the copyright (or patent). Certainly not just the person who entered the prompt. Instead it would be more like a group work: all of those who contributed to any of the LLM's inputs: all the authors of the stolen work + the person who programmed the LLM + the person who prompted the LLM. The machine itself is not doing any work - just following instructions, like my typewriter, but in a more complex manner.

(Edited my previous post to add this)

It's definitely interesting to think about it!

@elduvelle @blogdiva if a typewriter were mashing up the writing from great novels written on other typewriters across time & space

@jaystephens
Right.. But a typewriter wouldn't do anything on its own, just like an LLM wouldn't do anything on its own, without a human telling it what to do. Both need input from the human and they transform this input into something else. The difference is that the LLM got some preprogrammed input (indeed, part of its training set is a mash-up of actual people's novels, etc.) as well as the current input, provided by the human prompt.

The LLM is not anything like an independent entity creating anything.. it's just some code doing what it's programmed to do

@elduvelle
"its training set which is a mash up from actual people's novels, etc" is the key point.
The output cannot be considered only the result of the prompt, which was the only work done by the user.

@jaystephens

Definitely, see my other answer here
https://neuromatch.social/@elduvelle/116161779140284723

In the end I'd say the question is "who should benefit from the copyright", not whether the LLM's output is copyrightable or not, because I don't see why it wouldn't be. Obviously it's not going to be easy to figure it out, but in theory all those who contributed to the output (including in the training set) should be considered as contributors. The LLM itself, like a typewriter, is not a contributor.

@elduvelle
Yeah that would be a fair outcome.
It rather raises the question of to what extent the intended purpose of commercial LLMs as they actually exist is to obfuscate things precisely so that any outcome like that is unachievable.

@elduvelle @jaystephens
Your continuing not to see why LLM output can't be copyrighted is neither here nor there. It can't be. The part written by the human is the prompt itself. You could copyright that, sure. It just isn't useful.

If you could get a court to agree copyright went to all human contributors of the training data, then *nobody* could benefit from it, as nobody would have a right to make copies of it without *all* the contributors or their estates granting a license.

@petealexharris yeah, obviously the fact that the LLM's output comes from untraceable and sometimes stolen data is a problem.
My main point is that the SCOTUS considering the output of an LLM to be somehow the "creation" of software, instead of considering it the creation of a group of humans, is silly and wrong. It's as if they fell into the trap of treating the LLM as a separate entity, as if it were some kind of actual artificial intelligence.. which it really is not.

Software doesn't "create" anything, and the output of software like Photoshop is no different from the output of software like an LLM; it's still created by humans in the first place. The only difference is that we can't easily track the origin of the LLM's output.

@jaystephens

@elduvelle @jaystephens
If you can't track from the creative input of the human to the output, there's no provenance to attach ownership to. If you can identify that it contains unlicensed copyrightable material then it's infringing. Obviously you can't assert copyright on someone else's work, and if it's a mix, nobody can. The courts know it's a mess, and I suspect are refusing to make it worse.
@elduvelle @blogdiva a compiler is copyrighted, but the code generated by that compiler falls under the license of the source code being compiled, not the compiler's
@elduvelle @blogdiva Genuinely curious, are you always this silly or do you just play ridiculous as a Reply Guy?
@DrSaucy I'm not sure what your problem is, but are you sure you are answering to the correct post? Reply guy? What is ridiculous in my post?
@elduvelle I've no problem & I'm quite certain my reply was to your sophomoric response to the OP.
@DrSaucy that doesn't explain what you didn't like in my answer, but ok

@blogdiva Does this mean all those AI-generated ads are not copyrightable?

Time to remix.

https://www.nbcnews.com/tech/innovation/coca-cola-causes-controversy-ai-made-ad-rcna180665

Coca-Cola causes controversy with AI-made ad

Coca-Cola is facing backlash online over an artificial intelligence-made Christmas promotional video that users are calling “soulless” and “devoid of any actual creativity.”

NBC News
@drahardja Even more of a threat to film and music execs and producers wanting to use AI for films, TV and music. This could devalue those threats to human content creators.
@calbearo @drahardja yeah, pretty excited to start remixing Aronofsky's slop Revolutionary War series!
@drahardja @blogdiva
Copyrights are only to protect the Epstein class, silly.
Satya Nadella says as much as 30% of Microsoft code is written by AI

Microsoft CEO Satya Nadella on Tuesday said that as much as 30% of the company's code is now written by artificial intelligence.

CNBC
@Viss that is EXACTLY the admission i was thinking of. also, the AWS “agentic” fiasco that deleted a whole server farm, or whatever it was? yah. should be interesting.
@blogdiva @Viss Well, it's up to their employees to decide whether to release AI output on the Internet. They can; no legal contract can forbid them from doing so.
@blogdiva Also could make it harder for Hollywood and TV production studios, who are probably thinking they'll go full AI at some point in the coming years.
@blogdiva Those rulings would probably only apply to the LLM generated parts; any real software product would be a mix of human-designed and AI generated parts, so it would presumably still have copyright protection. Now it is possible that a software product that is entirely "vibe coded" isn't copyrightable in the US, but currently those products suck too badly to be worth stealing.

❝ any real software product would be a mix of human-designed and AI generated parts, so it would presumably still have copyright protection ❞

no, not necessarily.

IANAL but my impression is that they're extrapolating from measures used for determining plagiarism cases, along with case law involving FLOSS, the most famous being the decades-long Unix vs Linux battles.

again, this isn't my bread and butter but the techbros involved should know better. the proprietary claimants famously lost.

@not2b

@not2b @blogdiva the question is, how much human coding is required to make vibe code copyrightable? A single line? Meaningful modification to function? High level architecture with vibe coded boilerplate only?
@not2b @blogdiva Nobody is tracking which line of code is generated by AI vs written by a human. So any changes made since a company adopted AI as a coding tool are at least at risk here.
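
A hedged illustration of how such tracking *could* work if anyone did it (the `AI-Assisted:` Git trailer and the sample log below are hypothetical conventions, not an existing standard): a small script that tallies flagged commits from raw `git log` text.

```python
# Hypothetical sketch: if teams tagged AI-assisted commits with a Git
# trailer such as "AI-Assisted: yes", commit-level provenance could be
# tallied straight from `git log` output. The trailer name and the
# sample log are assumptions for illustration only.

SAMPLE_LOG = """\
commit aaa111
AI-Assisted: yes

commit bbb222

commit ccc333
AI-Assisted: yes
"""

def count_ai_assisted(log_text: str) -> tuple[int, int]:
    """Return (ai_assisted_commits, total_commits) found in raw log text."""
    total = log_text.count("commit ")          # one header line per commit
    ai = log_text.count("AI-Assisted: yes")    # commits carrying the trailer
    return ai, total

ai, total = count_ai_assisted(SAMPLE_LOG)
print(f"{ai}/{total} commits flagged as AI-assisted")  # 2/3 commits flagged as AI-assisted
```

Even this only works at commit granularity and only if every contributor opts in, which is exactly the point above: absent such a convention, nobody can say after the fact which lines were machine-generated.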
@blogdiva "Primarily"? They will just hire a human to append their name. Pragmatically, there will be no way to tell.
@geolaw not necessarily. am almost certain a paper just came out about how to reverse engineer a whole Gemini summary, to track down the sources plagiarized

@blogdiva Good point, maybe we can #DeMicrosoft the world: by arguing this, we could potentially make MS apps, software, and maybe even Windows #OpenSource.

I know, dreaming...

@blogdiva

If an AI/LLM reverse engineers the Windows codebase, and publishes the results, is this a Copyright violation?

What if Copilot does this? Is it a contract violation?

Did Copilot sign a NDA?

#CopyRight #AI #Insanity

@SpaceLifeForm @blogdiva well since these days MS seems to be updating the Windows codebase using vibe coding, then none of it is copyrighted anyway.
@blogdiva Even the worst SCOTUS of my lifetime says, "If you can't be arsed to make it, I can't be bothered to copyright it."
@blogdiva i have a feeling this will eventually be heard and ruled in favour of the corporations when enough big corps have more AI garbage than actual human work, just like how they ruled corporations are people when it comes to election financing.
@blogdiva
Who the heck would want Microslop code???
@BoloMKXXVIII @blogdiva so much stuff still only works on microslop windows. if windows was open source it would be so fucking cool actually because then there could be an actually good version of it for people to use.

@blogdiva I'm ignorant in the language here. Does "decline to make a ruling" mean they don't want to step on anyone's toes, or they don't think there's a case?

Could this rear its head again later?

IANAL and haven't checked directly with the SCOTUS archives, but my understanding is that it can be both. could be clarified if any of the justices included a comment in the decision (sometimes they stick them in the footnotes, which is why it’s always good to follow up with the OP)

@abmurrow

@abmurrow @blogdiva (I'm not a lawyer, but) SCOTUS is primarily an appellate court; they take 99% of cases on appeal, at their discretion. Declining to take a case means that the lower (circuit) courts' rulings stand, and remain as binding precedent in those circuits. There are multiple reasons not to take an appeal, and to my knowledge they don't publish explanations for declining, but probably they either think the lower court is very likely right, and/or they think it's just not important enough to be given some of the limited space on their docket.

Technically no court case is *truly* final as sufficiently motivated lawyers and judges can get even decades-old settled precedent overturned, but it's not likely to here unless Congress passes a significantly reworked copyright act as the current statute seems pretty clear about the whole "human creativity" thing (demonstrated by several courts agreeing) even if the language is a little more legalistic than that.

@blogdiva Finally some good news!
@blogdiva the bad news is: just because they lose copyright or patent protection (which they don't even have in Europe, by definition), it doesn't mean that the source code suddenly becomes public

@blogdiva

They said: Microsoft loves Open source

@donelias @blogdiva
I know that's a joke, but Open Source and Free Software licenses depend heavily on functional copyright protection. So if AI-generated content cannot be copyrighted, and a piece of software is/contains AI-generated code, it's not F/OSS either; it's a lawless land where the usual rules and licenses don't work.