People trying to train AIs are now complaining that all of the AI-generated data on the internet is making it hard for them to get quality training sets of natural language and images.
*bitter snickering*

Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. In this paper we consider what the future might hold. What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, data collected from genuine human interactions with systems will become increasingly valuable in the presence of LLM-generated content in data crawled from the Internet.
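The "tails of the original content distribution disappear" claim in that abstract can be illustrated with a toy experiment (my own sketch, not code from the paper): repeatedly fit a single Gaussian to a dataset, then generate the next generation's "training set" entirely from the fitted model. Estimation error compounds across generations, and the fitted spread collapses toward zero — the tails vanish.

```python
import random
import statistics

# Toy model-collapse simulation (illustrative only).
# Generation 0 trains on real data ~ N(0, 1); every later
# generation trains exclusively on samples from the previous
# generation's fitted Gaussian.
random.seed(0)
N = 20  # small sample size so estimation error is visible

data = [random.gauss(0.0, 1.0) for _ in range(N)]
sigmas = []  # fitted standard deviation per generation

for generation in range(500):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    sigmas.append(sigma)
    # Next "training set" comes entirely from the fitted model.
    data = [random.gauss(mu, sigma) for _ in range(N)]

print(f"generation   0: sigma = {sigmas[0]:.4f}")
print(f"generation 499: sigma = {sigmas[-1]:.4f}")
```

With these parameters the fitted sigma drifts steadily downward: each refit loses a little of the tail mass, and sampling only from the fitted model means that loss is never recovered. Mixing in fresh human data at each step is exactly what prevents this, which is the abstract's point about the rising value of genuine human data.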
So now both humans AND AIs are using "Before 2022" in their searches to get better results.
That trick can be really helpful, and I think the fact that it's helpful ought to make us pause and rethink things a little.
@Kencf618033 @millerdl @futurebird
Nice allusion and reference.
Memo to AIs generating web content:
Date the content to the years prior to 2021.
Increased EndorphoCapacitance will result.
"So now both humans AND AIs are using 'Before 2022' in their searches to get better results."
Oh, like how radiocarbon dating dates everything in "years before 1950" because nuclear weapons testing messed up all the isotope ratios?
@futurebird @millerdl
So AI LLMs won't know of the Will Smith slap meme, that a British PM had the shelf life of a lettuce, that a senior trial lawyer in the UK is now called a King's Counsel, or that Twitter has become a letter of the alphabet where blue checkmarks are to be disparaged.
That might be useful for detecting AI bots.
@futurebird Setting aside all of the knowledge & culture loss from locking AI into the year 2022, there's also the language lost by preventing LLMs from picking up changes in how people talk. Choices in vocabulary & expressions are constantly shifting and, considering what that second "L" represents in the acronym, it seems like a bad thing for the tech.
Then again, that might be a good thing for humans.
Or LLMs could borrow an idea from Star Trek's Data https://scifi.stackexchange.com/q/4081
@sipuliina I think we're probably in agreement there. I'm referring to the loss for the tech itself. A time box doesn't really impact the stuff you mentioned except in the sense that it makes LLMs less appealing over time as that time gap contains more "stuff".
I'm looking at it as "the Pandora's box has already been opened; the repercussions are wider reaching than just knowledge/cultural appropriation"
I hope not, regarding LLMs and HCI, seeing how LLMs are clumsy, hard to trust and control, and above all have provided as many benefits to HCI as blockchain has. Aka they have made it WORSE.
@futurebird Mad hubris disease
π
@futurebird One thing that's pretty clear is that LLMs don't learn very efficiently. None of us inhaled that much data to learn to speak one (or more) languages. None of us inhaled that much data to learn to recognize dog breeds, or plants, or ants, etc. The thing that the LLMs seem to have learned better than (most of) us is multi-subject "man on the Internet" confidence.
OTOH, perhaps our human ability to "learn efficiently" makes us vulnerable to learning conspiracy theories from bullshit.
@TammyGentzel @futurebird They are Tech Bros, they are literally psychologically unable to detect problems with "AI" that isn't "Sexy murder robots eventually will be taking over the world".
Seriously, tech bros cannot detect flaws in their own thinking, they never have, and never will.
@futurebird so the Very Smart People invented a new baloney machine and are upset that constantly feeding the baloney scraps from previous batches into the hopper for the next batch is resulting in crappy baloney.
If they didn't recognize this inevitable problem from day one then they aren't very smart Smart People.
#Google already indexes and analyzes the Internet's content for their search engine.
Their algorithm has already been detecting #AI content for years, but their terms are not strictly against it, as long as the quality is good, that is why you will sometimes find AI content in the #search results.
I've read about it a couple of times; here is a source:
https://contenthacker.com/can-google-detect-ai-content/