People trying to train AIs are now complaining that all of the AI-generated data on the internet is making it hard for them to get quality training sets of natural language and images.
*bitter snickering*

Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. In this paper we consider what the future might hold. What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, data collected about genuine human interactions with systems will be increasingly valuable in the presence of content generated by LLMs in data crawled from the Internet.
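The abstract's core mechanism, the disappearing tails, can be illustrated with a toy simulation that is not from the paper itself: fit a categorical distribution by empirical counts to samples drawn from the previous generation's model, over and over. Rare tokens that happen to get zero count in one generation are assigned zero probability forever after, so the support of the distribution can only shrink. All token names and parameter values below are made up for illustration.

```python
import random
from collections import Counter

def refit(probs, n, rng):
    """Sample n tokens from the current model, then re-estimate token
    probabilities by maximum likelihood (empirical frequencies).
    Any token that draws zero count vanishes from the model for good."""
    tokens, weights = zip(*probs.items())
    sample = rng.choices(tokens, weights=weights, k=n)
    counts = Counter(sample)
    return {t: c / n for t, c in counts.items()}

def collapse_demo(generations=20, n=100, seed=0):
    """Track the support size of the model across generations, each
    trained only on the previous generation's output."""
    rng = random.Random(seed)
    # "Human" data: a head of 5 common tokens plus a tail of 50 rare ones
    # (probabilities sum to 5*0.18 + 50*0.002 = 1.0).
    probs = {f"common{i}": 0.18 for i in range(5)}
    probs.update({f"rare{i}": 0.002 for i in range(50)})
    support = [len(probs)]
    for _ in range(generations):
        probs = refit(probs, n, rng)
        support.append(len(probs))
    return support

support = collapse_demo()
print("support size per generation:", support)
```

With a modest sample size, most of the 50 tail tokens disappear within the first few generations while the common head survives; the support size is monotonically non-increasing by construction, which is the irreversibility the abstract describes.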
So now both humans AND AIs are using "Before 2022" in their searches to get better results.
That trick can be really helpful, and I think the fact that it's helpful ought to make us pause and rethink things a little.
@Kencf618033 @millerdl @futurebird
Nice allusion and reference.
Memo to AIs generating web content:
Date the content to the years prior to 2021.
Increased EndorphoCapacitance will result.
"So now both humans AND AIs are using "Before 2022" in their searches to get better results. "
Oh, like how Radio Carbon dating dates everything to "years before 1950" because nuclear weapons testing messed up all the isotope ratios?
@futurebird @millerdl
So AI LLMs won't know of the Will Smith slap meme, that a British PM had the shelf life of a lettuce, that a senior trial lawyer in the UK is now called a King's Counsel, or that Twitter has become a letter of the alphabet where blue checkmarks are to be disparaged.
That might be useful for detecting AI bots.
@futurebird Setting aside all of the knowledge & culture loss from locking AI into the year 2022, there's also the language lost by preventing LLMs from picking up changes in how people talk. Choices in vocabulary & expressions are constantly shifting and, considering what that second "L" represents in the acronym, it seems like a bad thing for the tech.
Then again, that might be a good thing for humans.
Or LLMs could borrow an idea from Star Trek's Data https://scifi.stackexchange.com/q/4081
@sipuliina I think we're probably in agreement there. I'm referring to the loss for the tech itself. A time box doesn't really impact the stuff you mentioned except in the sense that it makes LLMs less appealing over time as that time gap contains more "stuff".
I'm looking at it as "the Pandora's box has already been opened; the repercussions are wider reaching than just knowledge/cultural appropriation"