The ultimate #Google self-own: Google's AI Overview result of "enshittified google" that includes "AI Overview" as an example of enshittified Google.

#enshittification #chatbots #selfown @pluralistic

@pluralistic UPDATE: The AI Overview search result was removed by Google for this particular search a few days after I posted this image.

@Bongolian @pluralistic

Still showing up for me. (But then I tend to be at the trailing end of the adoption curve.)

@cavyherd @pluralistic That's interesting, because I've tried on different days and different browsers, including a cookie-free browser, and it no longer appears for me.

@Bongolian @pluralistic

I wonder if there's a setting that got changed? (Chrome/YT just randomly decided I wanted to view videos in low-res format??)

@cavyherd @pluralistic AI Overviews are in experimental mode, and I've noticed Google turning them on and off. But even when it seemed to be working on other searches, it was still not working for me on that search.
@Bongolian @cavyherd @pluralistic If it's any insight, in the past we've found that we can find things on our homebrew server via Google, *but nobody else can*.
@Bongolian @pluralistic
Well done! Cory Doctorow's apt term for the greatest tech ailment of today has gone so viral it's been scraped up and shat out by Google's own AI.
#enshittification

@Bongolian

My Gods... it's become self-aware.

@Bongolian @pluralistic The final entry ☠️
@Bongolian @pluralistic nooo don't make the thinking rocks self-aware

Presumably, before this shipped, people at Google considered, and had to consciously decide, "What should happen when the LLM would naturally give an unflattering result about Google?"

The example response is a bit of a softball compared to what it could've been (e.g., no mention of the smoking-gun email about consciously sabotaging search quality to increase ad revenue), but still looks like they decided to act like a non-totally-shameless company in how the LLM result looks.

@neilvandyke If the Google execs find out, the first thing they will think is 'Who can we fire?'

@Bongolian
Or they just don't care, they think they're too big to fall now

@neilvandyke

Google Ads VP Emails Chrome & Search Team For Ranking Tweaks & Query Injections From Chrome

A new email was leaked as part of the DOJ investigation, this was from Jerry Dischler, the Vice President of Google Ads. The email was to Prabhakar Raghavan, who leads up all of Google Search including…

Search Engine Roundtable

@Bongolian
That last line though

@greenpete @pluralistic

@econads @Bongolian @greenpete @pluralistic
Yeah it's a good line.

I wonder who wrote it? Shame that when these things just lift text wholesale, they can't give citations.

@Oggie
I'm sometimes using phind.com for work, and the citations it gives so far have seemed legit.

@Bongolian @greenpete @pluralistic

@Oggie @econads @Bongolian @greenpete I wrote it, and it lifted it from me.

@pluralistic @econads @Bongolian @greenpete

I was fairly certain of this, and kinda going for a joke, but didn't quite land it, which is on me.

Not really bringing my a-game I don't think.

@Bongolian @pluralistic Wow. And somehow this is still live (usually by the time I see these kinds of posts, Google has adjusted their results to avoid showing the ones that are problematic for their brand).
@kzeta @Bongolian @pluralistic This is not problematic for google. Nowhere does it cite the author, nowhere does it suggest what could be done about it. This is the information the person came to find. They leave empty handed and minded. Google wins.

@Bongolian @pluralistic Welp. I reloaded my search. Magically, the term "Google" now only appears once in the AI results, in a very tame sentence:
"Some say that Google has been affected by enshittification..."

🙄

@kzeta @Bongolian @pluralistic

When you think about it, LLMs are the ultimate way to censor and manipulate information. You hoover up all the information that other people worked hard to create then you add your layer of censorship and manipulation on top through additional training.

By its nature, the evidence of doing so is completely unexaminable, since once trained, it becomes a bunch of meaningless encoded numbers in a black box.

@vjprema @kzeta @Bongolian @pluralistic I can see your point, but in my view it's also important to consider that this is in no way new or specific to LLMs.

I think it's simply a special case of free (libre) vs proprietary software. I have Windows running on my computer and most of its components are almost a black box to me and billions of other users outside of Microsoft - and it has the power to intermediate between me and what I want to access on the computer. The same is true of search engines to which end users have no access to the code base.

So the real argument here isn't a case against LLMs specifically, it's that software should be openly auditable by its users. In the case of LLMs that would mean those who develop and distribute them need to disclose not only the training set but the software and methodology employed.

That doesn't change the other issues with LLMs, but I think it's really important to understand that closed software that can't be audited and hence fully trusted is everywhere.
@kzeta @Bongolian @pluralistic
We need an alternative, honest #Search engine. 
@Quantillion @kzeta @pluralistic AI Overview is honest in its dishonesty… at least in this instance.
@Quantillion Kagi exists
@robnich
I don't know it.
I have all of Guggle uBlocked & use Startpage, but Startpage uses Guggle (just anonymously), & others.