I keep hearing people say AI-generated text “will get better” as if that’s a known, accepted fact.

But it's not a known, accepted fact. Massive language model AI text generators have been around for YEARS, and they've always been this bad, this prone to lying, this full of shit. We’re just getting it shoved in our faces now.

“It will get better” is a bet, not a statement of fact, and I bet against it.

Google did not roll this out now because it was good; they rolled it out now because Bing did it first and they panicked. They also fired all the people who had been saying no, it's not ready yet, even though those people were right then, and they're still right now.

And Bing only rolled it out because they had a search engine nobody gave a shit about and had to try something desperate because they need eyeballs to put ads next to.

Capitalism is why we're all seeing this shit now. Capitalism.

What's amazing to me is that I hear tech skeptics repeating the "it'll get better in the future" line, and I'm like, why would you think that? It hasn't gotten better recently, or even not-that-recently. It's fundamentally the wrong tool in the wrong place for a search engine. I don't think there's any amount of work that can make it better, even by people who give a shit about truth or providing a good user experience, and there's no evidence that Google and Bing give a shit about those things.

People ask me, what’s it like being right all the time?

And I always tell them, it’s awesome, actually.

https://www.nytimes.com/2024/06/01/technology/google-ai-overviews-rollback.html?unlocked_article_code=1.wU0.D3VD.yfT8H0oCQOSM&smid=url-share

Google Rolls Back A.I. Search Feature After Flubs and Flaws

Google appears to have turned off its new A.I. Overviews for a number of searches as it works to minimize errors.

The New York Times
@fraying
Wait until it will be trained by AI generated training data /s
@fraying I've read that natural language processing keeps hitting a ceiling right around 80% accuracy, because doing better than that requires semantic/symbolic comprehension, which not only isn't currently possible, we have no idea how to get there. I feel this might be related to why LLM/AI keeps failing so spectacularly, which might explain why I can't find any articles about it with the search engines run by companies not interested in anyone knowing about it...
@jwcph yup. Everything we have now is not intelligence, it’s just a parlor trick.
@fraying If you put profit above function then this is what you get.