Authors Are Furious After Finding Their Works on List of Books Used To Train AI

Authors using a new tool to search a list of 183,000 books used to train AI are furious to find their works on the list.

https://www.themarysue.com/authors-are-furious-after-finding-their-works-on-huge-list-of-books-used-to-train-ai/

The Mary Sue
Here’s an idea: legally force companies like OpenAI to rely on opt-in data rather than build their entire company on stealing massive amounts of data. Sam Altman was crying for regulations for scary AI, right?
Would search engines only be allowed to show search results for sources that had opted in? They "train" their search engine on public data too, after all.

They aren’t reselling the information; they’re linking you to the source, which then decides what to do with your traffic. Sites usually want that traffic; that’s the point of a public site.

That’s like saying it’s bad to point out where a bookstore is so someone can buy from it, whereas the LLM is stealing from that bookstore and selling the goods to you in a back alley.

AI isn’t doing that either. It’s selling statistical data about the books.
It literally shares passages verbatim

It shares popular quotes from books; it can’t reproduce arbitrary content from a book. The content needs to be heavily duplicated in the training data to stick around, and even then half of it might still end up being made up on the spot.

Also, requests for copyrighted content will be blocked by ChatGPT, which just returns the stock “I can’t do that” response anyway.

If you have some damning examples that show the opposite, show them.
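The duplication point above can be illustrated with a toy model. This is not how a real LLM works; it’s a hypothetical sketch using a simple bigram counter, where a passage repeated many times in the “training data” dominates the statistics and comes back out verbatim, while one-off text does not:

```python
from collections import defaultdict

def train_bigram(tokens):
    """Count next-word frequencies (a crude stand-in for a language model)."""
    model = defaultdict(lambda: defaultdict(int))
    for a, b in zip(tokens, tokens[1:]):
        model[a][b] += 1
    return model

def generate(model, start, length):
    """Greedily pick the most frequent continuation at each step."""
    out = [start]
    for _ in range(length):
        nxt = model.get(out[-1])
        if not nxt:
            break
        out.append(max(nxt, key=nxt.get))
    return " ".join(out)

# Hypothetical corpus: some one-off text plus one quote duplicated 20 times.
quote = "so it goes and goes".split()
noise = "so many things change so much every day it goes away".split()
model = train_bigram(noise + quote * 20)

print(generate(model, "so", 4))  # the heavily duplicated quote comes back verbatim
```

Drop the duplication (use `quote * 1`) and the greedy walk no longer reliably reproduces the quote, which is the same intuition: memorization in a statistical model tracks how often a passage recurs in training.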

Being blocked by ChatGPT just means that the interaction layer you see doesn’t show the output, not that the output wasn’t generated.

Everything public-facing that interfaces with an AI sits behind an extreme filtering layer for what is output. There are tons of checks to ensure they don’t output illegal content or any of a million other undesirable things.
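The idea that generation and display are separate layers can be sketched in a few lines. This is a hypothetical toy filter, not OpenAI’s actual pipeline: the model produces raw output either way, and a post-generation check decides whether the user ever sees it, here by flagging long verbatim runs from a protected passage:

```python
def overlaps_protected_text(output, protected_passages, window=8):
    """Flag output containing a verbatim run of `window` words from any protected passage."""
    words = output.lower().split()
    for passage in protected_passages:
        p_words = passage.lower().split()
        spans = {" ".join(p_words[i:i + window])
                 for i in range(len(p_words) - window + 1)}
        for i in range(len(words) - window + 1):
            if " ".join(words[i:i + window]) in spans:
                return True
    return False

def filtered_response(raw_model_output, protected_passages):
    """The layer the user sees: the model generated text either way; the filter decides what is shown."""
    if overlaps_protected_text(raw_model_output, protected_passages):
        return "I can't do that."
    return raw_model_output

passage = ("it was the best of times it was the worst of times "
           "it was the age of wisdom")
print(filtered_response("sure, here it is: " + passage, [passage]))  # blocked
print(filtered_response("here is a short summary instead", [passage]))  # passes
```

The point of the sketch is only the architecture: a refusal at this layer says nothing about whether the underlying model produced the text.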

I’m too lazy and care too little, but you can basically get it to roleplay as a book expert or something and to “remind” you of certain passages. That gets around the filter pretty easily; that’s how jailbreaks work.
That’s maybe an issue. I mirror speech a lot, though. How large are the passages?
“I’m not reselling your book, I am selling a machine that holds a mathematical formula that partly represents your entire book word for word and can reprint it on command!”
I mean, yeah? They were boiling it down to a concrete description, and that description is not valid. My brain holds most of Terry Pratchett’s works, too.
LLMs can't reprint their entire training data on demand. They rarely even remember quotes.