The Irony of 'You Wouldn't Download a Car' Making a Comeback in AI Debates

https://lemmy.world/post/19483350


Those claiming AI training on copyrighted works is “theft” misunderstand key aspects of copyright law and AI technology. Copyright protects specific expressions of ideas, not the ideas themselves. When AI systems ingest copyrighted works, they’re extracting general patterns and concepts - the “Bob Dylan-ness” or “Hemingway-ness” - not copying specific text or images. This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages.

The AI discards the original text, keeping only abstract representations in “vector space”. When generating new content, the AI isn’t recreating copyrighted works, but producing new expressions inspired by the concepts it’s learned. This is fundamentally different from copying a book or song. It’s more like the long-standing artistic tradition of being influenced by others’ work. The law has always recognized that ideas themselves can’t be owned - only particular expressions of them.

Moreover, there’s precedent for this kind of use being considered “transformative” and thus fair use. The Google Books project, which scanned millions of books to create a searchable index, was ruled legal despite protests from authors and publishers. AI training is arguably even more transformative.

While it’s understandable that creators feel uneasy about this new technology, labeling it “theft” is both legally and technically inaccurate. We may need new ways to support and compensate creators in the AI age, but that doesn’t make the current use of copyrighted works for AI training illegal or unethical.

For those interested, this argument is nicely laid out by Damien Riehl in FLOSS Weekly episode 744. https://twit.tv/shows/floss-weekly/episodes/744
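The “vector space” claim is easier to see with a toy sketch. The snippet below is illustrative only: real models learn dense embeddings rather than hashing word counts, and the `embed` function is invented for this example. It reduces a text to a fixed-size vector from which the original wording cannot be read back.

```python
# Toy sketch of turning text into a fixed-size vector ("embedding").
# Illustrative only: real LLMs learn dense representations; this just
# shows that the vector keeps coarse statistics, not the original text.
import hashlib

def embed(text, dims=8):
    """Map text to `dims` hashed word counts (the 'hashing trick')."""
    vec = [0] * dims
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dims
        vec[bucket] += 1
    return vec

v = embed("the times they are a-changin")
print(v)  # eight counts; the exact sentence is not recoverable from them
```

Whether that kind of abstraction is legally meaningful is a separate question; the sketch only illustrates the technical claim that the stored representation is not the text itself.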

Are the models that OpenAI creates open source? I don’t know enough about LLMs, but if ChatGPT wants exemptions from the law, it should result in a public good (emphasis on public).

The STT (speech to text) model that they created is open source (Whisper) as well as a few others:

github.com/openai/whisper

github.com/orgs/openai/repositories?type=all

GitHub - openai/whisper: Robust Speech Recognition via Large-Scale Weak Supervision

Robust Speech Recognition via Large-Scale Weak Supervision - openai/whisper

GitHub

Those aren’t open source, neither by the OSI’s Open Source Definition nor by the OSI’s Open Source AI Definition.

The important part for the latter being a published listing of all the training data. (Trainers don’t have to provide the data, but they must provide at least a way to recreate the model given the same inputs).

They are model-available if anything.

The Open Source Definition - Open Source Initiative

Introduction Open source doesn’t just mean access to the source code. The distribution terms of open source software must comply with the following criteria: 1. Free Redistribution The license shall...

Open Source Initiative

I did a quick check on the license for Whisper:

Whisper’s code and model weights are released under the MIT License. See LICENSE for further details.

So that definitely meets the Open Source Definition on your first link.

And it looks like it also meets the definition of open source as per your second link.

Additional WER/CER metrics corresponding to the other models and datasets can be found in Appendix D.1, D.2, and D.4 of the paper, as well as the BLEU (Bilingual Evaluation Understudy) scores for translation in Appendix D.3.

whisper/LICENSE at main · openai/whisper


Whisper’s code and model weights are released under the MIT License. See LICENSE for further details. So that definitely meets the Open Source Definition on your first link.

Model weights by themselves do not qualify as “open source”, as the OSAID makes clear. Weights are not source.

Additional WER/CER metrics corresponding to the other models and datasets can be found in Appendix D.1, D.2, and D.4 of the paper, as well as the BLEU (Bilingual Evaluation Understudy) scores for translation in Appendix D.3.

This is not training data. These are testing metrics.

I don’t understand. What’s missing from the code, model, and weights provided to make this “open source” by the definition of your first link? It seems to meet all of those requirements.

As for the OSAID, the exact training dataset is not required; per your quote, they just need to provide enough information that someone else could train the model using a “similar dataset”.

The problem with just shipping AI model weights is that they run up against the issue of point 2 of the OSD:

The program must include source code, and must allow distribution in source code as well as compiled form. Where some form of a product is not distributed with source code, there must be a well-publicized means of obtaining the source code for no more than a reasonable reproduction cost, preferably downloading via the Internet without charge. The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor or translator are not allowed.

AI models can’t be distributed purely as source because they are pre-trained. It’s the same as distributing pre-compiled binaries.
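A toy example may help make the analogy concrete. The sketch below uses a made-up one-weight model and invented data: after training, only the learned parameter survives, much as a compiled binary survives the build while the source stays behind.

```python
# Toy illustration: weights are a training artifact, like a compiled
# binary is a build artifact. The (x, y) pairs play the role of "source";
# after training, only the single weight w remains, and the original
# data cannot be recovered from it.

def train(data, lr=0.01, epochs=1000):
    """Fit y = w * x by plain stochastic gradient descent on squared error."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

w = train([(1, 2.0), (2, 4.1), (3, 5.9)])  # invented training set
print(round(w, 2))  # close to 2.0; the three pairs are gone from the model
```

Shipping `w` alone is like shipping the binary: useful, even inspectable, but not the preferred form for making modifications, which is the OSD’s test.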

It’s the entire reason the OSAID exists:

  • The OSD doesn’t fit because it requires you to distribute the source code in a non-preprocessed form.
  • AIs can’t necessarily distribute the training data alongside the code that trains the model. To help bridge that gap, the OSI made the OSAID: as long as you fully document the way you trained the model, so that somebody with access to the training data you used can make a mostly similar set of weights, you fall within the OSAID.
  • Oh, and on the OSAID front, the only issue stopping Whisper from being considered open source per the OSAID is that the information on the training data is published through arxiv, so using the data as written could present licensing issues.

    Ok, but the most important part of that research paper is published on the github repository, which explains how to provide audio data and text data to recreate any STT model in the same way that they have done.

    See the “Approach” section of the github repository: github.com/openai/whisper?tab=readme-ov-file#appr…

    And the Training Data section of their github: github.com/openai/whisper/blob/…/model-card.md#tr…

    With this you don’t really need to use the paper hosted on arxiv, you have enough information on how to train/modify the model.

    There are guides on how to Finetune the model yourself: huggingface.co/blog/fine-tune-whisper

    Which, from what I understand on the link to the OSAID, is exactly what they are asking for. The ability to retrain/finetune a model fits this definition very well:

    The preferred form of making modifications to a machine-learning system is:

    • Data information […]
    • Code […]
    • Weights […]

    All 3 of those have been provided.


    From the approach section:

    A Transformer sequence-to-sequence model is trained on various speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection. These tasks are jointly represented as a sequence of tokens to be predicted by the decoder, allowing a single model to replace many stages of a traditional speech-processing pipeline. The multitask training format uses a set of special tokens that serve as task specifiers or classification targets.

    This is not sufficient data information to recreate the model.

    From the training data section:

    The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages. As discussed in the accompanying paper, we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.

    This is also insufficient data information and links to the paper itself for that data information.

    Additionally, model cards ≠ data cards. It’s an important distinction in AI training.

    There are guides on how to Finetune the model yourself: huggingface.co/blog/fine-tune-whisper

    Fine-tuning is not re-creating the model. This is an important distinction.

    The OSI has a pretty simple checklist for the OSAID definition: opensource.org/…/the-open-source-ai-definition-ch…

    To go through the list of materials required to fit the OSAID:

    Datasets Available under OSD-compliant license

    Whisper does not provide the datasets.

    Research paper Available under OSD-compliant license

    The research paper is available, but does not fit an OSD-compliant license.

    Technical report Available under OSD-compliant license

    Whisper does not provide the technical report.

    Data card Available under OSD-compliant license

    Whisper provides the model card, but not the data card.

    Fine-Tune Whisper For Multilingual ASR with 🤗 Transformers


    Nothing about OpenAI is open-source. The name is a misdirection.

    If you use my IP without my permission and profit from it, then that is IP theft, whether or not you republish a plagiarized version.

    So I guess every reaction and review on the internet that is ad-supported or behind a paywall is theft too?
    No, we have rules on fair use and derivative works. Sometimes they fall on one side, sometimes another.

    Fair use by humans.

    There is no fair use by computers, otherwise we couldn’t have piracy laws.

    OpenAI does not publish their models openly. Other companies like Microsoft and Meta do.
    If they can base their business on stealing, then we can steal their AI services, right?
    Pirating isn’t stealing, but yes, the collective works of humanity should belong to humanity, not some slimy cabal of venture capitalists.
    Also, ingredients to a recipe aren’t covered under copyright law.
    To take a poke at your lovely strawman: ingredients to a recipe may well be subject to copyright, which is why food writers make sure their recipes are “unique” in some small way, different enough to avoid accusations of direct plagiarism.

    In what country is that?

    Under US law, you cannot copyright recipes. You can own a specific text in which you explain the recipe. But anyone can write down the same ingredients and instructions in a different way and own that text.

    Keep in mind that “ingredients to a recipe” here refers to the literal physical ingredients, based on the context of the OP (where a sandwich shop owner can’t afford to pay for their cheese).

    While you can’t copyright a recipe, you can patent the ingredients themselves, especially if you had a hand in doing R&D to create it. See PepsiCo sues four Indian farmers for using its patented Lay’s potatoes.

    No, you cannot patent an ingredient. What you can do - under Indian law - is get “protection” for a plant variety. In this case, a potato.

    That law is called the Protection of Plant Varieties and Farmers’ Rights Act, 2001. The “farmer” in this case being PepsiCo, which is how they successfully sued these 4 Indian farmers.

    Farmers’ Rights for PepsiCo against farmers. Does that seem odd?

    I’ve never met an intellectual property freak who didn’t lie through his teeth.

    Protection of Plant Varieties and Farmers' Rights Act, 2001 - Wikipedia

    I think there is some confusion here between copyright and patent, similar in concept but legally distinct. A person can copyright the order and selection of words used to express a recipe, but the recipe itself is not copyrightable. It can, however, fall under patent law if proven to be unique enough, which is difficult to prove.

    So you can technically own the patent to a recipe, keeping other companies from selling the product of that recipe; however, anyone can make the recipe themselves if they can acquire it and don’t resell the result. That recipe can also be expressed in many different ways, each having its own copyright.

    Yes, that’s exactly the point. It should belong to humanity, which means that anyone can use it to improve themselves. Or to create something nice for themselves or others. That’s exactly what AI companies are doing. And because it is not stealing, it is all still there for anyone else. Unless, of course, the copyrightists get their way.
    Unlike regular piracy, accessing “their” product hosted on their servers using their power and compute is pretty clearly theft. Morally correct theft that I wholeheartedly support, but theft nonetheless.
    Is that how this technology works? I’m not the most knowledgeable about tech stuff honestly (at least by Lemmy standards).
    There’s self-hosted LLMs, (e.g. Ollama), but for the purposes of this conversation, yeah - they’re centrally hosted, compute intensive software services.
    How do you feel about Meta and Microsoft who do the same thing but publish their models open source for anyone to use?
    Well, how long do you think that’s going to last? They are for-profit companies, after all.
    I mean we’re having a discussion about what’s fair, my inherent implication is whether or not that would be a fair regulation to impose.

    Those aren’t open source, neither by the OSI’s Open Source Definition nor by the OSI’s Open Source AI Definition.

    The important part for the latter being a published listing of all the training data. (Trainers don’t have to provide the data, but they must provide at least a way to recreate the model given the same inputs).

    Data information: Sufficiently detailed information about the data used to train the system, so that a skilled person can recreate a substantially equivalent system using the same or similar data. Data information shall be made available with licenses that comply with the Open Source Definition.

    They are model-available if anything.


    For the purposes of this conversation, that’s pretty much just a pedantic difference. They are paying to train those models and then providing them to the public to use completely freely in any way they want.

    It would be like developing open source software and then not calling it open source because you didn’t publish the market research that guided your UX decisions.

    Tell me you’ve never compiled software from open source without saying you’ve never compiled software from open source.

    The only differences between open source and freeware are pedantic, right guys?

    Tell me you’ve never developed software without telling me you’ve never developed software.

    A closed source binary that is copyrighted and illegal to use is totally the same thing as all the trained weights and underlying source code for a neural network published under the MIT license that anyone can learn from, copy, and use however they want, right guys?

    You said open source. Open source is a type of licensure.

    The entire point of licensure is legal pedantry.

    And as far as your metaphor is concerned, pre-trained models are closer to pre-compiled binaries, which are expressly not considered Open Source according to the OSD.

    And as far as your metaphor is concerned, pre-trained models are closer to pre-compiled binaries, which are expressly not considered Open Source according to the OSD.

    No, they’re not. Which is why I didn’t use that metaphor.

    A binary is explicitly a black box. There is nothing to learn from a binary, unless you explicitly decompile it back into source code.

    In this case, literally all the source code is available. Any researcher can read through their model, learn from it, copy it, twist it, and build their own version of it wholesale. Not providing the training data is no different than saying that Yuzu or any emulator isn’t open source because it doesn’t provide copyrighted games.

    I feel like it’s less meaningful because we don’t have access to the datasets.
    “but how are we supposed to keep making billions of dollars without unscrupulous intellectual property theft?! line must keep going up!!”
    You drank the kool-aid.

    So, is the Internet caring about copyright now? Decades of Napster, Limewire, BitTorrent, Piratebay, bootleg ebooks, movies, music, etc, but we care now because it's a big corporation doing it?

    Just trying to get it straight.

    You tell me, was it people suing companies or companies suing people?

    Is a company claiming it should be able to have free access to content or a person?

    Just a point of clarification: Copyright is about the right of distribution. So yes, a company can just “download the Internet”, store it, and do whatever TF they want with it as long as they don’t distribute it.

    That’s the key: distribution. That’s why no one gets sued for downloading. They only ever get sued for uploading. Furthermore, the damages (if found guilty) are based on the number of copies that get distributed. It’s because copyright law hasn’t been updated in decades and 99% of it predates computers (especially all the important case law).

    What these lawsuits against OpenAI are claiming is that OpenAI is making a derivative work of the authors’/owners’ works. Which is kinda what’s going on, but also not really. Let’s say that someone asks ChatGPT to write a few paragraphs of something in the style of Stephen King… His “style” isn’t even copyrightable, so as long as it didn’t copy his works word-for-word, is it even a derivative? No one knows. It’s never been litigated before.

    My guess: No. It’s not going to count as a derivative work. Because it’s no different than a human reading all his books and performing the same, perfectly legal function.

    It’s more about copying, really.

    That’s why no one gets sued for downloading.

    People do get sued in some countries, e.g. Germany. I think they stopped in the US because of the bad publicity.

    What these lawsuits against OpenAI are claiming is that OpenAI is making a derivative work of the authors/owners works.

    That theory is just crazy. I think it’s already been thrown out of all these suits.

    The Internet is not a person

    People on Lemmy. I personally didn’t realize everyone here was such big fans of copyright and artificial scarcity.

    The reality is that people hate tech bros (deservedly) and then hate everything they like by association, which is why everyone is now dick riding the copyright system.

    The reality is that people hate the corporations using creative people’s work to try and make their jobs basically obsolete, and they grab onto anything to fight against it, even if it’s a bit of a stretch.

    I’d hate a world lacking real human creativity.

    Real human creativity comes from having the time and space to rest and think properly. Automation is the only reason we have as much leisure time as we do on a societal scale now, and AI just allows us to automate more menial tasks.

    Do you know where AI is actually being used the most right now? Automating away customer service jobs, automatic form filling, translation, and other really boring but necessary tasks that computers used to be really bad at before neural networks.

    And some automation I have no problem with. However, if corporations would rather use AI than hire creatives, the creatives will have to look for other work and likely won’t have a space to express their creativity, not at work nor during leisure time (no time, exhaustion, etc.). Something should be done so it doesn’t go there. Preemptively, not after everything’s gone to shit. I don’t see the people defending AI from the copyright stuff even acknowledging the issue. Holding up the copyright card is, currently, the easiest way to try and avoid this happening.
    Personally, for me it’s about the double standard. When we perform small-scale “theft” to experience things we’d be willing to pay for if we could afford it, and the money funded the artists, they throw the book at us. When they build a giant machine that takes all of our work and turns it into an automated record scratcher that they will profit off of and replace our creative jobs with, that’s just good business. I don’t think it’s okay that they get to implement DRM because IP theft is supposedly so terrible, but then when they commit it systemically, against the specific licensing of content posted to the internet, that’s protected in the eyes of the law.
    What about companies who scrape public sites for training data but then publish their trained models open source for anyone to use?

    If they still profit from it, no.

    Open models made by nonprofit organisations, listing their sources, not including anything from anyone who requests it not to be included (with robots.txt, for instance), and burdened with a GPL-like viral license that prevents the models and their results from being used for profit… that’d probably be fine.

    And also be useless for most practical applications.

    We’re talking about LLMs. They’re useless for most practical applications by definition.

    And when they’re not entirely useless (basically, autocomplete) they’re orders of magnitude less cost-effective than older almost equivalent alternatives, so they’re effectively useless at that, too.

    They’re fancy extremely costly toys without any practical use, that thanks to the short-sighted greed of the scammers selling them will soon become even more useless due to model collapse.

    I mean, OpenAI’s not getting off scot-free; they’ve been getting sued a lot recently for this exact copyright argument. The New York Times is suing them for potentially billions.

    They throw the book at us

    Do they though? Since the Metallica lawsuits in the aughts there hasn’t been much prosecution at the consumer level for piracy, and what little there is is mostly cease-and-desists.

    Kill a person, that’s a tragedy. Kill a hundred thousand people, they make you king.

    Steal $10, you go to jail. Steal $10 billion, they make you Senator.

    If you do crime big enough, it becomes good.

    If you do crime big enough, it becomes good.

    No, no it doesn’t.

    It might become legal, or tolerated, or the laws might become unenforceable.

    But that doesn’t make it good, on the contrary, it makes it even worse.

    It’s not hypocritical to care about some parts of copyright and not others. For example most people in the foss crowd don’t really care about using copyright to monetarily leverage being the sole distributor of a work but they do care about attribution.
    There is a kernel of validity to your point, but let’s not pretend those things are at all the same. The difference between copyright violation for personal use and copyright violation for commercialization is many orders of magnitude.