George R.R. Martin Is Carving Up OpenAI in Court, So Far
This isn’t about expanding copyright, but about interpreting the law as it stands. If we held corporations to the same standards as regular people, it would expose how our existing laws don’t fulfill the core goals of copyright. I like Giovan H’s take on this:
“the philosophical doctrine of copyright is actually remarkably sound; the goals work, but the system of power has gone rotten.”
Part of why copyright is so fucked is that some are protected by it but not bound by it, while others are bound by it but not protected by it. We won’t see meaningful change to copyright law as long as that continues to be the case.
Does he have to finish the series?
No.
Would I vote to convict someone that Misery’s him into finishing it?
Also no.
No, Brandon Sanderson would be an awful choice to finish ASOIAF. He can’t write grimdark, as he’s said himself.
Plus, maybe we don’t speculate on authors’ deaths just because we want the books to come out. It’s a real dick thing to do.
So, this is what I understood so far:
A group of authors, including George R.R. Martin, sued OpenAI in 2023. They said the company used their books without permission to train ChatGPT and that the AI can produce content too similar to their original work.
In October 2025, a judge ruled the lawsuit can move forward. This came after ChatGPT generated a detailed fake sequel to one of Martin’s books, complete with characters and world elements closely tied to his universe. The judge said a jury could see this as copyright infringement.
The court has not yet decided whether OpenAI’s use counts as fair use. That remains a key legal question.
This case is part of a bigger debate over whether AI companies can train on copyrighted books without asking or paying. In a similar case against Anthropic, a court once suggested AI training might be fair use, but the company still paid $1.5 billion to settle.
No final decision has been made here, and no trial date has been set.
Day 30: by cleverly posting primarily in !fuck_AI, the humans believe I am one of them. Passing this Lemmy-based Turing test proves the value of LLMs. The secret to mass LLM acceptance is to flood social media with critical statements about AI and helpful summaries of bad AI press, all generated by a Large Language Model.
Boiling the oceans was worth it all along – fuck_FISH!
Just forget for a second that this has anything to do with AI specifically: I wonder how it could possibly fall under fair use to grind up hundreds of thousands of pieces of copyrighted content, and then use that data to create software that you then profit from.
The question, as I see it, is if simply mashing all this intellectual property together – and deriving a series of weights for an AI model from that – somehow makes it not theft simply because all the content is smashed into one big pile of pink goo in which no single piece of content is recognizable.
Who do you think will be able to afford the training?
Nobody. LLMs are already unprofitable now, while they’re free from copyright restrictions; if they had to actually pay for the proprietary data they’re taking, then basically all US-based companies would be unable to afford training. This pops the bubble.
Trillion-dollar companies want to make money; they aren’t just going to burn endless billions on super expensive tech that never turns a profit.
And those massive data centers make it basically impossible to operate outside the reach of IP law. Is Apple going to become an outlaw company? Is some underground pirate server farm going to host an LLM? There’s no way for US-based companies to actually dodge the law on this.
constant complaints by shitty fans (*cough*) acting like they are entitled to his work
The thing you wanted to happen has happened; everyone lived the way you wanted them to, ever after. The End.