It’s a huge disaster in the making. There’s probably a game theory model that explains what’s happening. AI is a trap.
At the beginning of the LLMs-as-consumer-tech phase, the big tech companies believed that they would soon generate synthetic data without human labor costs. But synthetic data caused model collapse when used as training data, so they still need humans.
But now their competitors are using scrapers and summarizers, and those are expensive, so they have to extract more value from search traffic, and that means fewer referrals to sites. Fewer referrals to sites reduce original content, which leads to less traffic and less training data for their models.
The only two ways out are to create an international regulatory body or to break up Facebook, Microsoft, and Google.
“Every day, we send billions of clicks to websites, and connecting people to the web continues to be a priority,” a Google spokesperson said in a statement. “New experiences like AI Overviews and AI Mode enhance Search and expand the types of questions people can ask, which creates new opportunities for content to be discovered.”
They followed up with: “You can totally trust me, and everything I just said. I am absolutely definitely not lying.”
Google search has been trash since even before LLMs; I believe it's one of the reasons ChatGPT got so much traction. The first 2 pages are just generic slop with paywalled or ad-infested content.
I switched to Kagi 2 years ago and it has relevant results on page 1 that I won't get in Google even after tens of pages. Plus I don't get bombarded with ads for that term for weeks.

Wanna listen to this story instead? Check out this week's Better Offline podcast, "The Man That Destroyed Google Search," available on Apple Podcasts, Spotify, and anywhere else you get your podcasts. UPDATE: Prabhakar has now been deposed as head of search, read here for more details. This is the story
People on Twitter regularly go “@grok is this true” to everything and trust the AI to be correct.
The same AI that said the fresh photo of National Guard members sleeping on the floor was from 2021…
Watch his recent interview with Nilay Patel from The Verge. Watching him dance around questions about this was painful.
This man only cares about increasing Alphabet stock prices to ensure as large a golden parachute as possible on the way out.
AI stands for Ambitious Indian
(Just plagiarizing the joke about that company that went bankrupt)
> This man only cares about increasing Alphabet stock prices to ensure as large a golden parachute as possible on the way out.
This is literally his legal obligation. Welcome to capitalism.
There are many ways to fulfil this “obligation”… I’d argue that he’s increasing Alphabet’s stock price in the short term, but what the fuck is going to happen when the sources all go out of business?
… oh right they’re going to become a news monopoly… cool cool cool
Why would you read a whole article to answer a simple question when AI gives you the answer directly?
Context? Nuance? Verifying the AI slop?
The last thing I googled is how to measure dress shirt size. Do you need context and nuance for everything you Google?
Do you prefer to click on the SEO-optimized first-page results that are full of ads and read through a nonsense article about elegance in formal wear just to get to the instructions on where to place the measuring tape on your shoulder? I MUCH prefer the AI-summarized response.
Most of the Internet is NOT intellectual writing; it’s blog spam to answer your daily curiosities and practical needs. A sufficiently trained model is a really good (and environmentally friendly) alternative.
> The last thing I googled is how to measure dress shirt size. Do you need context and nuance for everything you Google?
if AI is answering, yes.
> Do you prefer to click on the seo optimized first page results that are full of ads and read through a nonsense article…
No, but that’s not what I claimed, so you can have your strawman back.
> Most of the Internet is NOT intellectual writing, it’s blog spam to answer your daily curiosities and practical needs. A sufficiently trained model is a really good (and environmentally friendly) alternative.
Let me know when we get one. In the meantime, enjoy your thick, glue-riddled pizza sauce.
> Let me know when we get one. In the meantime, enjoy your thick, glue riddled, pizza sauce
What? That’s just stupid. I’m not remotely claiming they are intelligent, but to dismiss their utility completely is just idiotic. How long do you think the plug-your-ears strategy will work?
Pick any model that has come out this year, ask it my example query or any similar daily curiosity you would Google, and show me how it gives you “thick, glue-riddled pizza sauce”. Show me a single GPT-3.5-comparable model that can’t answer that query with sufficient accuracy.
> if AI is answering, yes.
You’re being obtuse. You don’t need nuance in trying to figure out what size collar you should buy.
> but to dismiss their utility completely is just idiotic.
Not what I said at all. I simply stated that AI answers cannot be trusted without verifying them, which makes them a lot less useful.
You’re moving the goalposts. You said you need nuance in how to measure a shirt size; you’re arguing just to argue.
If a model ever starts answering these curiosities inaccurately, it would be an insufficient model for that task and wouldn’t be used for it. You would immediately notice this is a bad model when it tells you to measure your neck to get a sleeve length.
Am I making sense? If the model starts giving people bad answers, people will notice when reality hits them in the face.
So I’m making the assertion that many models today are already sufficient for accurately answering daily curiosities about modern life.
> You’re moving the goalposts. You said you need nuance in how to measure a shirt size, you’re arguing just to argue.
I said I needed context to verify that the AI was not giving me slop. If you want to trust AI blindly, go ahead; I’m not sure why you need me to validate your point.
> If a model ever starts answering these curiosities inaccurately, it would be an insufficient model for that task and wouldn’t be used for it.
And how would you notice, unless you either already know the correct answer (at least a ballpark) or verify what the AI is telling you?
> You would immediately notice this is a bad model when it tells you to measure your neck to get a sleeve length
What if it gives you an answer that does not sound so obviously wrong? Like measuring the neck width instead of the circumference, or measuring from shoulder to wrist?
> So I’m making the assertion that many models today are already sufficient for accurately answering daily curiosities about modern life.
And once again I tell you: you can trust it blindly, but I would not. And I will add that I do not need another catalyst for the destruction of our planet just so I can get some trivia questions answered. Given the environmental cost of AI, I would expect a significant return, not just a trivia machine that may be wrong 25% of the time.