Theory (which is mine) (and what it is, too): Enshittification has led directly to acceptance of LLMs, because the public is already used to software that is unfit for purpose.

@pluralistic

@wendynather @pluralistic and it simply overshadows all of the things this technology can be useful for.
@wendynather @pluralistic and: already used to work in bullshit jobs
@wendynather @pluralistic
Yep. Search engines too: people used to search for an answer, now they ask an LLM. They didn't start doing it because they wanted a machine to think for them, they started because they stopped getting useful search results.
@TheGreatLlama @wendynather @pluralistic
They started because they were too stupid to switch off of Google to something else that was actually reliable.
@mtconleyuk What are these actually reliable alternatives? Because everything I've tried so far has been just proxying Google or Bing results, or has its own search index that's just as unusable, just in different ways
@das_robin @TheGreatLlama @wendynather @pluralistic
I've been using Kagi for a couple of years now and am satisfied with the results. They do offer AI slop, but I can disable all of it, and that's fine. Price is reasonable, about $5/month for my number of queries (there are cheaper tiers).
@mtconleyuk True, Kagi was pretty good when I gave it a try. But the owner's general conduct and stance on privacy specifically really made me not want to keep paying money for that, unfortunately
@mtconleyuk But also, I wanna add, saying everyone who doesn't switch away from Google is stupid and then offering a paid option as an alternative seems a little insulting to people who can't afford to easily shell out an additional couple of bucks per month
@das_robin I can understand that, and sympathise. I would probably be on DDG if I couldn’t pay for Kagi. I just think that most people never even think about what it is Google do to them and their data, and are sufficiently incurious to never explore any alternatives.
@mtconleyuk Fair. But even then I'd hesitate calling them "stupid". I'd assume most of them are simply uninformed and/or don't have the necessary mental bandwidth and time to care about understanding the issue, let alone look for solutions, next to all the other shit they already have to deal with in their lives. My point is, empathizing will usually go much further than antagonizing :)
@das_robin
Well, fair enough.
@mtconleyuk @wendynather @pluralistic
I mean... That's a nice thought, but I regularly cycle through every option on the market. None of them work remotely as well as ca. 2012 Google. LLMs aren't a useful alternative, but there aren't any good options.
@TheGreatLlama @mtconleyuk @wendynather @pluralistic It is staggeringly rare to find a search engine that will search exclusively for the literal exact term you entered. This certainly used to be the default.

@DamonWakes @TheGreatLlama @mtconleyuk @wendynather @pluralistic

Yeah, it's on purpose, to force you to use LLMs, which nobody wants

However, there are some topics where it's helped me find research papers: cases where it's hard to pin down a good set of search terms because they're all very generic

@crowdotblack @TheGreatLlama @mtconleyuk @wendynather @pluralistic It's been going on long enough that it can't have started for the sake of LLMs. I suspect it's just accommodating sloppy searches at the expense of precise ones - not unwelcome where I don't know an exact term, but hugely frustrating when I do (and none of the results contain it).
@DamonWakes @crowdotblack @TheGreatLlama @mtconleyuk @wendynather @pluralistic Are there any addons/plugins/userscripts recommended for filtering? I haven't looked properly - figured if there was something, there would be lots of coverage. Seems like something mechanically previewing and filtering results would do the trick and remove the need for AI scanning the results
@Moray @DamonWakes @crowdotblack @mtconleyuk @wendynather @pluralistic
I would doubt it: the whole intent with how they're serving up the garbage is that it isn't easily distinguished from useful content. The signal to noise ratio is just too low.

@TheGreatLlama
And the SEO'd webpages you get bury the info you want in a ton of ads and other BS.

Currently the AI answer is quick, easy, and not yet enshittified.

So what are people going to choose?

@wendynather @pluralistic

@TheGreatLlama

@wendynather @pluralistic the sad truth is that most of the time LLMs do give you a more useful answer than a search results page that's heavily gamed by SEO slop and stuffed full of ads. And when it fails to do so, it usually fails gracefully, such that you still feel somewhat satisfied with it. Which is what makes it insidious, because the shortcomings are less obvious.

@TheGreatLlama @wendynather @pluralistic

β€œTired of trying to slake your thirst for knowledge at the poisoned well? Why not try this poisoned chalice instead! It’s more ergonomic and convinced it’s right!”

@TheGreatLlama @wendynather @pluralistic

There's also this idea that LLMs are easier and faster. Laziness has something to do with it.

@raindrops_and_roses @wendynather @pluralistic
I get the logic: when you only get maybe one or two remotely relevant search results per page, it's easy to think, "Hey, wouldn't it be great if there was a machine to sort through these for me."

Of course, there already WAS a machine to do that before Google decided it would be better to use it to feed you advertising.

@wendynather @pluralistic The unfit for any purpose class of software predates e14n, with Microslop bravely pioneering the way in desensitization training.
@wendynather @pluralistic I agree, and LLMs, for now at least, don't have messy ad-ridden interfaces. For non-ad-blocking users it can be easier than scrolling past a page of ads to get an answer, albeit a likely wrong answer. The enshittified web paved the way for a shitty text extruder.
@marbletravis @wendynather @pluralistic I use qwant.com and an ad blocker, and my web search experience is how google used to be: just links to pages that might help me.

@wendynather

"AI search assist" also gives people a quick answer that isn't buried in ads (yet).

So enshittification of the broader web gives people an immediate reward for choosing AI, and that pushes them past their initial distrust of LLMs.

Consumer AI is just earlier on its own enshittification wave, though.

@pluralistic

@wendynather That theory, which is yours - which is to say, is your own - bears consideration. There is very good software available, but there would also be a significant cognitive hurdle getting everyone running Unix clones.

It would be interesting charting hallucination engine acceptance according to choice of OS, or among computer geeks, choice of computer language.

@wendynather @pluralistic No. For real.

I don't like to tell this story as if the person in it is "stupid", because they're not. They just don't know computers.

Ran into a lady getting ripped off for hours by her boss. All the ladies there are, so they were talking about what they use to track their hours. One said she used AI.

I said you have to watch that because they're bad at stuff like that and it'll get stuff wrong.

"Oh yeah! All the time!"

They don't know and they're being lied to.