People trying to train AIs are now complaining that all of the AI-generated content on the internet is making it hard for them to get quality training sets of natural language and images.

*bitter snickering*

@futurebird see also this paper https://arxiv.org/abs/2305.17493 on how training on generated data can cause big problems
The Curse of Recursion: Training on Generated Data Makes Models Forget

Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. In this paper we consider what the future might hold. What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, data collected from genuine human interactions with systems will be increasingly valuable in the presence of LLM-generated content in data crawled from the Internet.
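The tail-loss effect the abstract describes can be seen in a toy simulation (my own sketch, not the paper's experiment): fit a simple model to data, sample from the fit, fit again, and repeat. Each finite sample under-represents the tails, so the fitted spread drifts toward zero over generations.

```python
# Toy sketch of model collapse (an illustration, not the paper's setup):
# repeatedly fit a Gaussian to samples drawn from the previous
# generation's fitted model. Finite sampling under-represents the
# tails, so the fitted spread shrinks as the generations pass.
import random
import statistics

random.seed(0)  # make the run repeatable

def fit_and_resample(data, n):
    """Fit a Gaussian to `data`, then draw n fresh samples from the fit."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)], sigma

n = 10  # small samples make the tail loss fast and visible
data = [random.gauss(0.0, 1.0) for _ in range(n)]  # the "human" data, std = 1

sigmas = []
for _ in range(100):  # 100 generations of training on generated data
    data, sigma = fit_and_resample(data, n)
    sigmas.append(sigma)

print(f"fitted std, generation   1: {sigmas[0]:.4f}")
print(f"fitted std, generation 100: {sigmas[-1]:.4f}")
```

By the last generation the fitted standard deviation has collapsed well below the original 1.0: the model still "works", but the rare, tail-of-the-distribution material is gone.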


@millerdl So now both humans AND AIs are using "Before 2022" in their searches to get better results.

That trick can be really helpful, and I think the fact that it's helpful ought to make us pause and rethink things a little.

@futurebird Setting aside all of the knowledge & culture lost by locking AI into the year 2022, there's also the language lost by preventing LLMs from picking up changes in how people talk. Vocabulary & expressions are constantly shifting, and, considering what that second "L" in the acronym stands for, freezing the model in time seems like a bad thing for the tech.

Then again, that might be a good thing for humans.

Or LLMs could borrow an idea from Star Trek's Data https://scifi.stackexchange.com/q/4081

Why could Data not use contractions?

@danherbert @futurebird How is it a loss of any kind if AI companies can't steal our knowledge, culture and language as efficiently as they would like?

@sipuliina I think we're probably in agreement there. I'm referring to the loss for the tech itself. A time box doesn't really impact the things you mentioned, except that it makes LLMs less appealing over time as that widening gap contains more "stuff".

I'm looking at it as "Pandora's box has already been opened; the repercussions are wider-reaching than just knowledge/cultural appropriation"