News outlets in crisis mode as Google-led AI search push crushes website traffic

Major news outlets are in panic mode as an artificial-intelligence chatbot push by Google and other Big Tech firms crushes website traffic.

New York Post
Which is such a shortsighted move, because as soon as all the news portals close shop, Google’s scraper will have nothing relevant to summarize and it’s gonna be shit.
Nothing is stopping the AI summaries from using social media as the primary source
Can’t wait for Google to AI-summarize AI-generated social media posts for artificial Google users created to hike ad prices. It’s gonna be wild
It’s going to get wild
The robots only want to hang out with the robots.
So social media are news outlets now. Good. Glad we cleared that up.
Seems inevitable 😮‍💨
In the context of my original comment, social media companies like Meta and Reddit have fought tooth and nail to not be considered news networks or news outlets, specifically because they don’t want to be beholden to the laws that regulate news outlets/networks. Going all in on AI LLM scrapers, when those scrapers rely pretty heavily on news networks and other media to stay useful, means they’ll starve themselves of scrapable content, and it jeopardizes their ineligibility to be sued for what users post (in the US), potentially forfeiting what protections against lawsuits they have. It’s a no-win situation for them to keep betting on AI, which has already largely reached the limit of what it’s capable of in current iterations because of the lack of clean organic training data.
I do respect your optimism, but realistically, I doubt they’re forfeiting anything. Fox News broadcasts blatant lies daily. All they had to do was a behind the scenes rebranding.
Fox News isn’t legally considered a news outlet. In fact, we have literally seen them admit to not being one in court proceedings.
Yes, correct, that is what I just said.

It’s a huge disaster in the making. There’s probably a game theory model that explains what’s happening. AI is a trap.

At the beginning of the LLMs-as-consumer-tech phase, the big tech companies believed that they would soon generate synthetic data without human labor costs. But synthetic data caused model collapse when used as training data. So they still need humans.

But now their competitors are using scrapers and summarizers, and those are expensive, so they have to extract more value from search traffic, and that means fewer referrals to sites. Fewer referrals to sites reduce original content, which leads to less traffic and less training data for their models.

The only two ways out are to create an international regulatory body or to break up Facebook, Microsoft, and Google.

That’s when Google will buy whatever is left of Condé Nast or BuzzFeed at bottom dollar and start using more AI to shit out “news”.
they will be using that opportunity to make up the truth they want.

“Every day, we send billions of clicks to websites, and connecting people to the web continues to be a priority,” a Google spokesperson said in a statement. “New experiences like AI Overviews and AI Mode enhance Search and expand the types of questions people can ask, which creates new opportunities for content to be discovered.”

They followed up with: “You can totally trust me, and everything I just said. I am absolutely definitely not lying.”

With a link to NYPost lol
Most news sites ask me to consent to google tracking or pay, neither of which am I prepared to do in almost all cases. Why I do everything I can to avoid google tracking shouldn’t need explaining. The idea that I will pay to be propagandised is 20th century.
they seem to be walking that back somewhat, moving it to its own tab.
People really trust it? Like, it’s been so wrong on things for me that I automatically skip past it to the search results. Why bother anymore?
I use extensions to block AI results… having to skip past them is annoying
Yup, I’ve been seeing more and more people straight use AI results to support their arguments.
I know several people in my community who confidently trust it for search. they are not stupid people… but sometimes I question their choices

google search has been trash since even before llms, i believe it’s one of the reasons chatgpt got so much traction. the first 2 pages are just generic slop with paywalled or ad-infested content.

i switched to kagi 2 years ago and it has relevant results on page 1 that i wouldn’t get in google even after 10s of pages. plus i don’t get bombed with ads for that term for weeks.

they purposefully fucked up their search so people would have to click through more pages to find their answer, giving google a chance to display more ads.
I have a different theory. Google started scraping Reddit since it went public, so I think this is a strategy to get people to use [search query] + reddit in order to find answers, so that Google can scrape that data to train their AI.
this isn’t a theory of mine, it came out in leaked testimony. totally intentional enshittification for profit
The Man Who Killed Google Search

Wanna listen to this story instead? Check out this week's Better Offline podcast, "The Man That Destroyed Google Search," available on Apple Podcasts, Spotify, and anywhere else you get your podcasts. UPDATE: Prabhakar has now been deposed as head of search, read here for more details. This is the story

Ed Zitron's Where's Your Ed At
Depends on what it is. As a reference lookup for a simple programming function and with an example, it’s been a game changer.
Yeah, asking an LLM the simple stuff that Google used to just give you at the top of the search before AI Overviews were a thing (like how old some celebrity is, a high-level code example, or using it as a stupidly complex spell checker) is pretty good.
Usually I would be digging through old forums for examples from 10 years ago, or searching Stack Exchange. Personally, my use of scripting and programming sites has totally diminished.
I still find it weird that people don’t use Kagi for search, or at least StartPage or DDG. Google hasn’t been useful for five-ish years and admitted in court that they damaged results to prop up ads.
I used Kagi for a few months when other engines failed. It did come through a few times. But paying $5 a month to get one extra good search result per month was a hard sell for me. If they offered a much, much smaller package of like 20-50 searches per month, or just pay as you go, I’d definitely be in.
Good news for you, Kagi just made the first 50 searches free for anybody without creating an account.
Yeah that’s what I used for the first 6 months to try it out.

People on Twitter regularly go “@grok is this true” to everything and trust the AI to be correct.

The same AI that said the fresh photo of National Guard members sleeping on the floor was from 2021…

people still on twitter are complete, brainwashed idiots… so this behaviour tracks
Google wants it all this time. No traffic for anyone but them after they steal all your content.
They saw what AOL tried to do and decided they can make it work.
AOL? Like the company that was always giving away those free coasters and frisbees?
“gonna be”? Google has been circling the drain for years, and “AI” summaries were always shit.

Watch his recent interview with Nilay Patel from The Verge. Watching him dance around questions about this was painful.

This man only cares about increasing Alphabet stock prices to ensure as large a golden parachute as possible on the way out.

AI stands for Ambitious Indian

(Just plagiarizing the joke about that company that went bankrupt)

This man only cares about increasing Alphabet stock prices to ensure as large a golden parachute as possible on the way out.

This is literally his legal obligation. Welcome to capitalism.

No, making money is not a legal obligation. The CEO at my last job told the board, two years in a row, that he intended to lose money so we could invest in our people and tech. They cheered him.

there are many ways to fulfil this “obligation”… i’d argue that he’s increasing alphabet stock price in the short term but what the fuck is going to happen when the sources all go out of business?

… oh right they’re going to become a news monopoly… cool cool cool

The sad part is this is actually his job description, and Alphabet could get sued by shareholders if he didn’t do exactly that. The stock market needs to be criminalized, not glorified as the one truth like it’s treated right now.
You described all executives everywhere.
Wouldn’t this kill their ad revenue? Which is like…most of their revenue?
Ad-supported articles are a dead industry, and Google realizes this better than anyone. People don’t go to the source anymore to answer curiosities; why would you read a whole article to answer a simple question when AI gives you the answer directly?

why would you read a whole article to answer a simple question when AI gives you the answer directly?

context?, nuance?, verifying the AI slop?

The last thing I googled is how to measure dress shirt size. Do you need context and nuance for everything you Google?

Do you prefer to click on the seo optimized first page results that are full of ads and read through a nonsense article about elegance in formal wear just to get to the instructions on where to place the measuring tape on your shoulder? I MUCH prefer the AI summarized response.

Most of the Internet is NOT intellectual writing, it’s blog spam to answer your daily curiosities and practical needs. A sufficiently trained model is a really good (and environmentally friendly) alternative.

The last thing I googled is how to measure dress shirt size. Do you need context and nuance for everything you Google?

if AI is answering, yes.

Do you prefer to click on the seo optimized first page results that are full of ads and read through a nonsense article…

No, but that’s not what i claimed so you can have your strawman back

Most of the Internet is NOT intellectual writing, it’s blog spam to answer your daily curiosities and practical needs. A sufficiently trained model is a really good (and environmentally friendly) alternative.

Let me know when we get one. In the meantime, enjoy your thick, glue riddled, pizza sauce

Let me know when we get one. In the meantime, enjoy your thick, glue riddled, pizza sauce

What? That’s just stupid, like I’m not remotely claiming they are intelligent, but to dismiss their utility completely is just idiotic. How long do you think the plug your ears strategy will work for?

Pick any model that has come out this year, ask it my example query or any similar daily curiosity you would Google, and show me how it gives you “thick, glue riddled, pizza sauce”. Show me a single GPT-3.5-comparable model that can’t answer that query with sufficient accuracy.

if AI is answering, yes.

You’re being obtuse. You don’t need nuance in trying to figure out what size collar you should buy.

but to dismiss their utility completely is just idiotic.

not what I said at all. I simply stated that AI answers cannot be trusted without verifying them, which makes them a lot less useful

You’re moving the goalposts. You said you need nuance in how to measure a shirt size, you’re arguing just to argue.

If a model ever starts answering these curiosities inaccurately, it would be an insufficient model for that task and wouldn’t be used for it. You would immediately notice this is a bad model when it tells you to measure your neck to get a sleeve length.

Am I making sense? If the model starts giving people bad answers, people will notice when reality hits them in the face.

So I’m making the assertion that many models today are already sufficient for accurately answering daily curiosities about modern life.

You’re moving the goalposts. You said you need nuance in how to measure a shirt size, you’re arguing just to argue.

I said I needed context to verify AI was not giving me slop. If you want to trust AI blindly, go ahead, I’m not sure why you need me to validate your point

If a model ever starts answering these curiosities inaccurately, it would be an insufficient model for that task and wouldn’t be used for it.

And how would you notice, unless you either already know the correct answer (at least in the ballpark) or verify what the AI is telling you?

You would immediately notice this is a bad model when it tells you to measure your neck to get a sleeve length

What if it gives you an answer that does not sound so obviously wrong? Like measuring the neck width instead of the circumference, or measuring shoulder to wrists?

So I’m making the assertion that many models today are already sufficient for accurately answering daily curiosities about modern life.

And once again I tell you that you can trust it blindly while I would not. And I will add that I do not need another catalyst for the destruction of our planet just so I can get some trivia questions answered. Given the environmental cost of AI, I would expect a significant return, not just a trivia machine that may be wrong 25% of the time.

I meant why would people advertise on Google if it won’t convert to clicks anymore?
They won’t, and I’m saying Google knows that their advertising cash cow is running out of milk.
Oh no… Not the major news outlets, however will we cope. Let them burn.
Then how would I know the 10 surprising things I can do to be healthier? /s