As predicted, ML models suffer irreversible damage when you train them on generated data, a phenomenon these researchers are calling "model collapse". Uncurated datasets are effectively poisonous.

Who, apart from anyone who thought about it for a few seconds, could have predicted?

https://arxiv.org/abs/2305.17493v2

The Curse of Recursion: Training on Generated Data Makes Models Forget

Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. In this paper we consider what the future might hold. What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of content generated by LLMs in data crawled from the Internet.

arXiv.org
@mhoye
Delightful. Hopefully that gives companies like OpenAI a few years to waste their time on.

@wakame
"I wish every AI bro a very model collapse"

@wakame @michaelcoyote @mhoye

But Eliezer Yudkowsky told me LLMs were simulating human minds and modeling the physical universe to generate their output, how could this be

@HeavenlyPossum @wakame @michaelcoyote He's also suffering from model collapse from training on garbage data, he's just ahead of the curve.

@mhoye I don’t have the focus to unpack all the reasons, but this bit amuses me:

> We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web.

@mhoye I guess the short version is:
“Maybe that’s why you shouldn’t have been doing that. And when people told you to quit it, you should have listened…”

Oh yeah and: “This is what happens when you tell everyone to paste your word vomit everywhere, bc it’s the newest bestest word vomit.”

@glitchontwitch There's a lot to unpack here for sure.
@mhoye This got mentioned on the work Slack AI channel.. I called it "Inbreeding", which.. I don't think was unfair.
@mhoye Garbage in, garbage out.
@mhoye
"Replicative fading". As any xerox artist from the 80s could have guessed

@mhoye and the chaser...

https://arxiv.org/abs/2306.07899

> estimate that 33-46% of crowd workers used LLMs when completing the task

Artificial Artificial Artificial Intelligence: Crowd Workers Widely Use Large Language Models for Text Production Tasks

Large language models (LLMs) are remarkable data annotators. They can be used to generate high-fidelity supervised training data, as well as survey and experimental data. With the widespread adoption of LLMs, human gold-standard annotations are key to understanding the capabilities of LLMs and the validity of their results. However, crowdsourcing, an important, inexpensive way to obtain human annotations, may itself be impacted by LLMs, as crowd workers have financial incentives to use LLMs to increase their productivity and income. To investigate this concern, we conducted a case study on the prevalence of LLM usage by crowd workers. We reran an abstract summarization task from the literature on Amazon Mechanical Turk and, through a combination of keystroke detection and synthetic text classification, estimate that 33-46% of crowd workers used LLMs when completing the task. Although generalization to other, less LLM-friendly tasks is unclear, our results call for platforms, researchers, and crowd workers to find new ways to ensure that human data remain human, perhaps using the methodology proposed here as a stepping stone. Code/data: https://github.com/epfl-dlab/GPTurk

arXiv.org

@mhoye This reminds me of European royal families in-breeding, and developing no end of illnesses as a result. (Yeah, I know, "garbage in, garbage out", etc.)

@mhoye @adamgreenfield Really _looking forward_ to OpenAI's extremely conceptual cover of "I am sitting in a room"

@mhoye we did predict this in 2020. Glad Ross did the math though.

https://berryvilleiml.com/results/ara.pdf


@mhoye Wait you mean the mediocre white dude AI stans with absolutely no understanding of what they're doing were (gasp!) ~wrong~ when they insisted feeding the machine its own 💩 would result in exponential growth?? No way...
@mhoye I’m so excited to read this. I was wondering about just this issue this morning. “what happens when you feed the garbage machine garbage?”
@futurebird Yeah, turns out it's like putting the sausage back through the meat grinder over and over again, hoping that somehow you're going to make yourself a pet pig, instead of the weird, gross slurry you're inevitably going to end up with.
@mhoye It’s kind of like when you seal part of an ecosystem in a jar and biodiversity falls and falls and falls.
@mhoye @futurebird Doesn't this assume new models are going to continue using indiscriminately scraped training data, like most of the current generation? For a lot of (technical, ethical, legal) reasons imo that trick was only ever going to work once.
@n1ckfg @mhoye @futurebird what other options are there if the goal is cranking cost "efficiency"?

@kunev @futurebird @n1ckfg For a couple of decades there was a very valuable market for pre-WW2 salvaged shipwreck metal, to use in high-sensitivity radiological instruments; surface metals had been poisoned by nuclear testing, so untainted material was suddenly rare, expensive and necessary.

This is what we're going to see with training data sets; the highest-value work will be in exceedingly careful, tedious curation for date-range authentic, spam-filtered, human-sourced raw material.

@mhoye @kunev @futurebird @n1ckfg And I guess this will be called "Handcrafted LLMs" and sold at a premium by companies run by tattooed dudes with a ponytail. 😉

@random_musings @kunev @futurebird @n1ckfg

"Young people trying to build a meaningful future out of the world we've given them" aren't a class of people I'm inclined to condescend to, and computing as a field could stand to have a lot more curated craft and opinionated artisanalism in it than it does now.

@mhoye @kunev @futurebird @n1ckfg I agree, but simply can't see that happening with training data sets.

This is extremely tedious work that makes Wikipedia moderation look like a fun gig. So, I frankly speaking can't see open-source / open-knowledge type of communities evolving around this topic (but admittedly, I wouldn't have bet on Wikipedia working out before it came along).

Hence why I think that such handcrafted data sets and LLMs will end up being expensive artefacts sold at a premium.

@random_musings @kunev @futurebird @n1ckfg I think the opposite is going to be the case - that we're going to see communities emerge as people move away from centralized services, where those communities find value in humanity and seek consensus around opting into participatory modelling.
@mhoye @kunev @futurebird @n1ckfg I like your optimism and hope the future proves me wrong. 😊
@kunev @futurebird @mhoye I think the costs are going to change, though--legal precedents on copyrightability, infringement, etc. will roll out around the world; "folks who can't rent a thousand H100s" form a very large market for efficient local compute solutions, etc. Firefly is an interesting data point for an attempt to conventionally license an entire corpus (Adobe Stock)

@n1ckfg @kunev @futurebird

I think that the computational costs of incremental improvement will get driven close enough to zero that assistive localhost stacks and personal model maintenance will become just another system-level background task, like filesystem maintenance. Start with a decent model, train it on a year of your own chat and sent-mail logs, here you are. Autotune for your own narrative voice.

@mhoye like a AI black hole?
@MicyMontu "information goes in, nothing of value comes back out", yeah. A singularity for sure, but not the singularity the AI/EA clowns were expecting.

@mhoye

My prediction: In 15 years, the "open Internet", the one we currently rely on, the one where people (and SEO businesses) freely post and freely read, will be like landline phones are now - filled with mostly junk, and people who don't know how to get off of it.

I don't know what the parallel to cellphones - the thing most people will have migrated to - will be, but we will have definitely trashed the previous system.

@mhoye i kind of like the idea that the late 2022 version of chatgpt is as good as these things are ever going to get
@mhoye The scary thing for me is, I remember this being well understood back in the 2000s when I took my grad ML and CV classes, though we did focus a lot on evolutionary algorithms.
@mhoye we saw a degeneration even on smaller LLMs back in 2021 when they were fed synthetic data - we touched on it in the conclusion of our RANLP paper at the time.

@mhoye then you have these clowns who just started counting money:

Our latest research estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually across the 63 use cases we analyzed—by comparison, the United Kingdom’s entire GDP in 2021 was $3.1 trillion. This would increase the impact of all artificial intelligence by 15 to 40 percent. This estimate would roughly double if we include the impact of embedding generative AI into software that is currently used for other tasks beyond those use cases.

https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#key-insights

The economic potential of generative AI: The next productivity frontier

Generative AI’s impact on productivity could add trillions of dollars in value to the global economy—and the era is just beginning.

McKinsey & Company
@mhoye so basically, the rise of AI-generated content on the internet will destroy AI-generated content on the internet?

@mhoye

It's the oldest rule in computer science -- "Garbage in, garbage out" -- implemented as a feedback loop.

And yes, this was easy to predict as soon as LLMs became generally available. A coworker of mine pointed out that the Google search corpus from late 2022 has become a crucially important resource, as the last record of an infosphere not polluted by LLM output. There's a rather disturbing analogy here to the high market value of scrap steel from before 1945.

https://en.wikipedia.org/wiki/Low-background_steel

Low-background steel - Wikipedia

@isomeme I understand the idea, but I think it's pretty optimistic to think that the '22 Google corpus is somehow usefully pure. SEO content farms have been around nearly as long as search engines themselves.

@mhoye

Of course. Still, I think LLMs are already proving to be a vastly more damaging kind of pollution.

@mhoye We didn't need to predict it. We have seen it happen more than once, over the years, with other models. Remember that ChatGPT allows us to say "This is great" or "this sucks"; we can even tell it why a result sucked.
@mhoye meanwhile didn't google or msft train a model by simply checking their answers against gpt4?
@mhoye Step 1: compute lots of averages on your data to be able to construct similar data
2: Randomly create similar data
3: Average random data
4: Be surprised the average is 0.
@mhoye
Is this like how my phone keeps "correcting" my text badly and then it remembers that's what I type all the time and it reinforces its own garbage output?

@mhoye The collapse might come sooner than expected: "He works fast and gets high marks for quality, he said, thanks to ChatGPT. The bot is wonderful, he said, letting him speed through $10 tasks in a matter of minutes. When we spoke, he was having it rate another chatbot’s responses according to seven different criteria, one AI training the other."

from 20 June, The Verge https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots

AI Is a Lot of Work

How many humans does it take to make tech seem human? Millions to support OpenAI, Google, Meta, and every other major tech company. As AI becomes ubiquitous, a vast tasker underclass is emerging — and not going anywhere.

The Verge

@strangebirds

Yeah, I noticed that. It's beautiful, in a way.

@mhoye how could they call it "model collapse" and not "burger squared"