People trying to train AIs are now complaining that all of the AI-generated data on the internet is making it hard for them to get quality training sets of natural language and images.

*bitter snickering*

@futurebird see also this paper https://arxiv.org/abs/2305.17493 on how training on generated data can cause big problems
The Curse of Recursion: Training on Generated Data Makes Models Forget

Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. In this paper we consider what the future might hold. What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of content generated by LLMs in data crawled from the Internet.

arXiv.org
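The tail-collapse effect that abstract describes can be sketched in a few lines. This is only a toy Gaussian illustration (not the paper's actual experiment): each "generation" fits a mean and standard deviation to samples drawn from the previous generation's fitted model, and because the finite-sample fit underestimates the variance on average, the distribution's tails shrink away.

```python
import random
import statistics

# Toy sketch of model collapse: each generation is "trained"
# (a maximum-likelihood Gaussian fit) on data sampled from the
# previous generation's model instead of from the real distribution.
random.seed(0)
N_SAMPLES = 100       # training examples per generation
GENERATIONS = 1000

mu, sigma = 0.0, 1.0  # generation 0: the real data, N(0, 1)
stds = []
for _ in range(GENERATIONS):
    # "Scrape the web": the training set is model-generated.
    data = [random.gauss(mu, sigma) for _ in range(N_SAMPLES)]
    # "Train the next model": refit mean and standard deviation.
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    stds.append(sigma)

print(f"std after generation 1: {stds[0]:.3f}")
print(f"std after generation {GENERATIONS}: {stds[-1]:.2e}")
```

On average each refit multiplies the variance by (N-1)/N, so the spread decays geometrically and rare "tail" values stop being generated at all, which is the qualitative effect the paper reports for VAEs, GMMs, and LLMs.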

@millerdl

So now both humans AND AIs are using "Before 2022" in their searches to get better results.

That trick can be really helpful, and I think the fact that it's helpful ought to make us pause and rethink things a little.

@futurebird I like to think of it as these models sowing the seeds of their own destruction, which I find a little satisfying.
@millerdl @futurebird sadly, they are taking the web with them. That was kind of a cool thing.
@TofuTheSquirrel @millerdl @futurebird more and more, excluding AI from spaces with no exceptions is going to be a high value selling point.
@TofuTheSquirrel @millerdl @futurebird
I'm not sure what we will miss, though. #People did spend a lot of time on the #web on things of little lasting value, like #entertainment, self-marketing, or analyzing and sorting #information as a job. The most useful things, like information gathering and tools, will just be all-in-one and more efficient than ever.
@TofuTheSquirrel @millerdl @futurebird
#Communities and #People talking to people and exchanging #ideas #online will probably continue, just with more #verification (I'm really not sure what this will look like, or what the best options are that are bulletproof and still #private) and/or bot-content detection, to ensure real users are saying human things to each other.
@millerdl @futurebird
The modern Cadmus sows dragon's teeth and von Neumann machines spring up which get smaller and smaller and smaller.
@millerdl @futurebird this thought happened in my mind too..
@futurebird @millerdl Reminiscent of salvaging pre-atomic-test steel.
@Kencf618033 @futurebird @millerdl Yes - I keep thinking of pre-1945 steel when people talk about AI poisoning of the internet

@futurebird @millerdl

Memo to AIs generating web content:

Date the content to the years prior to 2021.
Increased EndorphoCapacitance will result.

@skua @futurebird @millerdl
plays hell with the magneto reluctance, though.

@futurebird @millerdl

"So now both humans AND AIs are using "Before 2022" in their searches to get better results. "

Oh, like how radiocarbon dating reports everything in "years before 1950" because nuclear weapons testing messed up all the isotope ratios?

@futurebird @millerdl In the 1990s after the fall of the Soviet Union, there was talk of the 'end of history'. But perhaps 2022 is really the end of history in the sense of our ability to easily and reliably document it.

@futurebird @millerdl
So AI LLMs won't know of the Will Smith slap meme, that a British PM had the shelf life of a lettuce, that a senior trial lawyer in the UK is now called a King's Counsel, or that Twitter has become a letter of the alphabet where blue checkmarks are to be disparaged.

That might be useful for detecting AI bots.

@futurebird Setting aside all of the knowledge & culture loss from locking AI into the year 2022, there's also the language lost by preventing LLMs from picking up changes in how people talk. Choices in vocabulary & expressions are constantly shifting and, considering what that second "L" represents in the acronym, it seems like a bad thing for the tech.

Then again, that might be a good thing for humans.

Or LLMs could borrow an idea from Star Trek's Data https://scifi.stackexchange.com/q/4081

Why could Data not use contractions?

I was wondering if there was an explanation for why Data from ST:TNG couldn't use contractions?

Science Fiction & Fantasy Stack Exchange
@danherbert @futurebird How is it a loss of any kind if AI companies can't steal our knowledge, culture and language as efficiently as they would like?

@sipuliina I think we're probably in agreement there. I'm referring to the loss for the tech itself. A time box doesn't really impact the stuff you mentioned except in the sense that it makes LLMs less appealing over time as that time gap contains more "stuff".

I'm looking at it as "the Pandora's box has already been opened; the repercussions are wider reaching than just knowledge/cultural appropriation"

@millerdl @futurebird There will be a nasty second order effect because humans now get a lot of their information from the web, so human interaction will be contaminated too.
@futurebird someone called the phenomenon of LLMs training on each other's data "Habsburg AI" and it makes me very happy
@futurebird LLMs are simultaneously the future of HCI and have genuine understanding of the prompts they produce... And are flooding the internet with meaningless regurgitated garbage that's making it harder to train other LLMs.

@zanzi @futurebird

I hope not, regarding LLMs and HCI, seeing how LLMs are clumsy, hard to trust and control, and above all have provided about as many benefits to HCI as blockchain has. Aka they have made it WORSE.

@futurebird One thing that's pretty clear is that LLMs don't learn very efficiently. None of us inhaled that much data to learn to speak one (or more) languages. None of us inhaled that much data to learn to recognize dog breeds, or plants, or ants, etc. The thing that the LLMs seem to have learned better than (most of) us is multi-subject "man on the Internet" confidence.

OTOH, perhaps our human ability to "learn efficiently" makes us vulnerable to learning conspiracy theories from bullshit.

@futurebird pretty sure there are trained professionals working on this problem; the advantage gained by solving it would be huge.
@dr2chase @futurebird mankind’s greatest strength is recognizing and learning patterns. Unfortunately, it’s also our greatest weakness.
@dr2chase @futurebird Uhhhh, how many years did it take your "efficient" monkey brain to learn language? LLMs may need tons of data, but they make sense of it in days of training, not years. Also, you had far more data feeding your learning than any LLM that has ever existed. That data just didn't seem like data to you. It seemed like "listening to your mom" and "watching TV".
@blterrible @futurebird The larger LLMs have as much text fed to them as thousands of humans could read in their thousands of lifetimes. Your claim fails simple arithmetic.
@dr2chase @futurebird Text is not the only way humans learn language. ChatGPT never had a mother lean over its crib and coo at it, and yet babies start learning language that way very early on. ChatGPT was not trained on all the episodes of Gilligan's Island and has little mapping between the usage of hats to represent roles and characters, yet all of that maps to language as well. An image captioned "You're dead!" conveys no meaning to an LLM, and little to you without the non-text image.

@futurebird

wait, you're telling me...

the well we've been pissing into, now has piss in it?

@futurebird wait. I was assured by the AI tech bros that once the AIs started training AIs, we would zoom off to the singularity.
@tob @futurebird Yeah, that's when everything breaks, you can't turn on the TV or buy anything, and you starve in your home because the locks no longer work. Just like, well, almost like, well, sort of like Kurzweil predicted.
@fgbjr @tob @futurebird
A couple of decades ago Bruce Sterling gave a Long Now talk about the Singularity entitled "Your Future as a Black Hole." Much of it holds up disturbingly well, with solid applications to Web 2.0, Web3, Crypto, AI, etc.
https://longnow.org/seminars/02004/jun/11/the-singularity-your-future-as-a-black-hole/
Bruce Sterling: The Singularity: Your Future as a Black Hole - The Long Now

@FeralRobots @tob @futurebird I clicked through, and the site tells me my phone is too old to play the podcast. That's a shame, but also kind of the point I guess.
@fgbjr
especially given one of his quiet obsessions around that time & since has been something he calls "dead media", which is kind of self-explanatory. He'd get a chuckle out of it.
@tob @futurebird
@FeralRobots @tob @futurebird The Singularity has been cancelled. In pictures:
@tob @futurebird
We are. Look, LLM training is destroying good content on the internet; isn't that singular?
@futurebird It seems like more people in AI high places should have foreseen the problem. I know many of us down here with the plebes thought it would have an impact.
@TammyGentzel @futurebird For them it's good, I suspect; this takes care of the competition while they can already detect #AI #content 😋

@TammyGentzel @futurebird They are Tech Bros; they are literally psychologically unable to detect any problem with "AI" other than "sexy murder robots will eventually take over the world."

Seriously, tech bros cannot detect flaws in their own thinking; they never have, and they never will.

Who could have ever foreseen this... 🀦

@futurebird so the Very Smart People invented a new baloney machine and are upset that constantly feeding the baloney scraps from previous batches into the hopper for the next batch is resulting in crappy baloney.

If they didn't recognize this inevitable problem from day one then they aren't very smart Smart People.

@futurebird The DataKrash is today. No need for a Rache Bartmoss; the Web is eating itself thanks to the AI bros. We just need to safeguard what data is truly valuable to help rebuild a better Net afterwards. Let's just begin to build a Blackwall to hold the rogue AIs at bay, and we'll be fine without them.
@futurebird The main players have a big advantage: #Google can already detect #AI #content because they have been training #algorithms for so long; the small players don't have that advantage. I would suggest using data from before #ChatGPT became popular with end consumers. The good thing for small AI companies is that they don't get robots.txt- and #IP-blocked (I think >15% of major sites are blocking the main AI scrapers), so they still have access to those data pools, which are also guaranteed not to be AI.
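For context on the robots.txt blocking mentioned above, here's a minimal illustration using Python's standard urllib.robotparser. GPTBot is OpenAI's published crawler name; the rules shown are an assumed example, not any particular site's actual file.

```python
from urllib.robotparser import RobotFileParser

# An example robots.txt of the kind many large sites now publish to
# opt out of AI crawlers: the named AI bot is disallowed everywhere,
# while everyone else is still allowed.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("GPTBot", "/article"))        # the named AI crawler is blocked
print(rp.can_fetch("SomeOtherBot", "/article"))  # unnamed crawlers fall through to "*"
```

Note that this only stops crawlers that honor robots.txt and that are named in it, which is exactly the poster's point: an obscure scraper nobody has heard of isn't covered by anyone's Disallow rules.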
@madeindex @futurebird
Afaik ChatGPT content can't be distinguished from "natural content"; I would like a source for that claim. Also, having a model for that would be quite energy-consuming and would raise costs just for indexing/finding proper training data. And lots of content is hybrid. So I'm not convinced by that argument.

@Zeugs @futurebird

#Google already indexes and analyzes the Internet's content for their search engine.
Their algorithm has already been detecting #AI content for years, but their terms are not strictly against it as long as the quality is good; that is why you will sometimes find AI content in the #search results.

I've read about it a couple of times; here is a source:
https://contenthacker.com/can-google-detect-ai-content/

Can Google Detect AI Content? Here's What You Need to Know

Can Google detect AI content? Yes - but Google's revised E-E-A-T guidelines and Danny Sullivan's take on AI content creation have changed the game.

Content Hacker
@madeindex @futurebird
The OpenAI classifier is no more, and the article is from Feb 2023! There has not been that much improvement, that's true.
But just because Google says you can use AI content doesn't mean they can detect it; with that policy they don't even have to check. Putting all of Google's crawled content through a GPT detector would be quite expensive, and these classifiers have never worked reliably.
@Zeugs @futurebird
I would argue #AI detection is already part of what #Google does and requires no extra step, as they do Natural Language Processing to understand the content anyway.
They can even understand text in images via #OCR, and the images themselves (which requires much more computing power).
Article #Spinning (AI-rewritten content) has been around for a long time, and I think they are very good at detecting it; they just don't seem to want to remove it.
April 3 2024:
https://nealschaffer.com/can-google-detect-ai-content/
@madeindex @futurebird That's my concern, too. The internet could (should) have been all of us collectively participating for fun, our shared experiences providing the growth medium to make useful tools for everyone. If the internet itself isn't a viable place for shared data, then we're beholden to large companies that can afford to make their own data. The internet has unleashed nightshade in the communal garden, but the giants can afford to move to their own greenhouses.