@plantarum @aparrish with some fun additions! like:
"you have hit the free plan limit."
shake again in 4 hours.
@plantarum @aparrish The endgame will be when the search engine AI just offers up the recipe summary you're after immediately, and we won't have to bother visiting the recipe sites at all.
The recipe sites, having jumped through all those stupid hoops to please the search engines, will then wither on the vine, betrayed again and killed for good, at long last.
All value extracted, in the name of the CEO and the shareholders, forever and ever amen.
[Click here to subscribe to Google Cooking for 9.99 per month]
@aparrish "The feature works on webpages with fewer than 5,000 words"
...so it only summarizes things that are short, too. Doubly useless.
Usually, I dip my phone into the toilet (before flushing), whenever I want this outcome.
Which is: Never.
Like the radio in The Hitchhiker's Guide to the Galaxy, where you could conveniently change stations with the smallest gesture, so you had to stay totally immobile if you wanted to actually listen to the station you wanted.
@aparrish sorry this is perhaps a bit too lewd but doesn't this mean one needs to make a wanking motion to get the "ai" summary?! huh.. this is almost poetry
(For those who don't know: the shake gesture on iPhone is normally for undo, and it's not a gentle sway like in that video; it's a vigorous up-and-down thing.)
Joy. Setting aside the AI nonsense, this reminds me of the mobile versions of news websites from 20 years ago. If you were on an early mobile phone, even on a Palm Pilot, which had plenty of screen space, oftentimes all you got was a useless summary that skipped important details.
@aparrish
I give here an excerpt:
"• If [AI] model providers make inference much more efficient, then they will not use enough computing power to consume all that is brought to market by the semiconductor industry. If this happens, it will trigger a downward cycle in this industry, significantly slowing down the production of new hardware and possibly having significant global economic and financial repercussions.
..."
1/3
@aparrish
"...
• If model providers do not make their inference processes more efficient, they will not be able to structurally reduce their marginal costs and, failing to achieve the desired profitability, will resort to the usual means (advertising, tiered subscriptions), which will slow down adoption. ..."
2/3
@aparrish
"... If adoption slows down, model providers will struggle to achieve profitability (with the exception of those with captive markets), their demand for computing power will weaken, and the semiconductor industry will produce excess capacity and enter a downward cycle, taking part of the AI industry with it.
-> So, the central issue linking today’s semiconductor industry and genAI model providers is how to define how much efficiency gain is enough."
3/3
I recently saw this article in Wired about a research team using some sort of LLM nonsense to generate research ideas.
And it was painfully obvious (to me, at least) that what they had gotten from the "tool" was just abject random garbage.
And that they had sort of squinted at it to "find" something they wanted to do anyway, and then they praised it, and said "Wow, such a far-fetched idea, we really couldn't have done it without [LLM nonsense]"
A sort of modern prostitution.