@luis_in_brief @april Yeah I feel like using LLMs to aggregate search results is actually one of the few GOOD uses of modern LLMs (as long as you're not using it for anything critical like health advice)
This just leaves you with the bad uses of LLMs, like news sites replacing actual journalists with ChatGPT
@austinphilp @luis_in_brief @april The summaries are a problem as well, even if accurate; it's still parasitic, and killing its hosts. I saw this pointed out today:
It honestly breaks my heart to write this article, but I want to be as transparent as possible with our readers because you are the ones that have quite literally kept our lights on over the past five years, and you deserve to know the truth about what's happening behind the scenes, so here it […]
@foolishowl @luis_in_brief @april Very fair point - though I'd argue this is just a continuation of an existing problem
Google already provided summaries for search results before LLMs, which reduced click-throughs, and many top results these days are basically just copy-pastes of the actual source (sometimes without even a link to the original)
@foolishowl @austinphilp @luis_in_brief @april
Hadn't heard of RetroDodo, seems like a great site.
Google really has done incredible damage to its own main business, and it shows why FOSS, RSS, and the fediverse are only going to become more important.
@corhen @foolishowl @luis_in_brief @april "Google really has done incredible damage to its own main business"
Well, this is the crux of the issue in a nutshell - they've done incredible damage to their core product, but because of the effective monopoly they've built for themselves, their actual business has never been better (they saw a 40% bump in profits last quarter).
And ultimately, the way corporate America works, that's literally the only thing that matters to them
@april
Sssshhhhh...
;)
@april I assume this is because LLMs are actually very expensive to run and operate at a loss. They've been free/cheap for a while in hopes of selling people on the technology and getting people to train their LLMs for them. I wonder if this is a subtle admission that this AI shit is unsustainable
The thing about LLMs is that most people don't want to deal with the consequences of them. They had to be forced on people
but you can't do that and also demand payment to use them