I've been talking about how AI will directly lead to Idiocracy (2006) scenarios (like Brawndo has what plants crave, Charlie Chaplin leading the 3rd Reich, etc) for some time, but today's update to the "can you melt eggs?" saga is as clear an illustration of how that happens as I think it's possible to get.

Quora's AI answers made up the melting point of eggs, and then Google picked it up and responded affirmatively that you can indeed melt eggs.

Then people wrote articles about how stupid it is that Google says eggs can melt. Then Google fixed the answer.

Then Google ingests an article about how stupid it is that Google says you can melt eggs, and suddenly Google starts answering affirmatively again that you can melt eggs, citing the article about how stupid Google is for thinking you can melt eggs.
It's basically the knowledge version of the Grey Goo scenario.

From grey goo to the grey lady.

@nyquildotorg I remember in university when internet sites were not considered primary sources for research, because we understood that anyone could publish anything.

Now it publishes itself

@gnuplusmatt @nyquildotorg If Wikipedia keeps a human in the loop at all times, there's a good chance for it to become the authoritative source, although human bias will then always creep in.

@tanepiper There's a good amount of bots on Wikipedia, but fortunately those mainly focus on fixing metadata related issues.

@gnuplusmatt @nyquildotorg

Wikipedia:List of citogenesis incidents - Wikipedia

@tanepiper @gnuplusmatt @nyquildotorg If AI is trained with texts written by humans, human bias also creeps in. We need an AI trained / raised / educated without human influence...
@nyquildotorg : Grey Goo makes Soylent Green seem rather... yesterday's leftovers.
@nyquildotorg I've heard a few people describe it as informational gray goo. Digital microplastics is the term I've been using to describe it.
@andrewfeeney @nyquildotorg
Citogenesis feeding on digital microplastics yields AI-generated misinformation gray goo
@nyquildotorg People generally strongly trust search engine results, but I've encountered some complete tosh recently.
We're being waterboarded with disinformation and soon people will be completely stupefied and disoriented

@nyquildotorg

flooding the zone with shit... *web scale*!

@nyquildotorg I was just listing some stuff on eBay today and I found they offer AI generated item descriptions now. They're obviously craptacular, so of course I just listed an item called "melted eggs."
@AKPAB @nyquildotorg I do like how large they made the AI button for the listing tool lol. Please use this garbage tool because reasons 🥺
@AKPAB @nyquildotorg
"previously enjoyed" is bad on it's own but in connection with "melted eggs" it's indescribable.
@nyquildotorg It's largely been fixed now, but for a good while if you searched on Google "famous rock bands without bass players" you'd have auto-populated results of "Led Zeppelin", "The Beatles", "The Who", and some other bands who, incidentally, contain some of the most iconic bass players of all time. All referenced from a fistful of AI-driven sites that fooled each other into that non-reality.

@chrisabides @nyquildotorg

That's wild. Is this from research you did or is there any kind of citation?

@dangoodin @nyquildotorg Literally just my own curiosity probably from about a year ago. I think I was in a conversation about bands and bass guitar (I myself am a bass player) and I searched and found this anomaly. Was true as recently as a few months ago, but now appears to have been "fixed" one way or another.

So this is all anecdotal, but some webarchive sleuthing could likely find what was up.

@chrisabides @dangoodin @nyquildotorg

This is a "Shirt without stripes" case: https://news.ycombinator.com/item?id=22925087 - search engines are really bad at understanding the word "without".


@chrisabides @nyquildotorg And I thought Google-bombing was dead!
@pbx @chrisabides @nyquildotorg [Yakov Smirnoff voice] On the AI web, Google bombs itself!
@chrisabides @nyquildotorg I feel like this is the most realistic replication of human behavior by AI. Once false information has been passed around a certain number of times, a lot of actual humans will literally just believe it as fact.
@jamie @chrisabides @nyquildotorg at least humans are somewhat in touch with the real world and real eggs
@spinal @jamie @chrisabides I remember seeing a thing where "city kids" were told how we get eggs and simply refused to believe it
@nyquildotorg And if that isn't enough to make people take "AI" off-line in any situation, I don't know what would. Seriously… https://fedia.social/notes/9k8wbwdjnawy0860
@nyquildotorg this story really is the gift that keeps on giving

@nyquildotorg This is a totally wild guess here, because I haven't read any of those articles, but I imagine Google was able to do this because the articles contained whole runs of words that it could lift out of context and present in the opposite sense to the article. I did an example of this just the other week: https://mas.to/deck/@miblo/111092855958279904

Maybe the article writers could've prevented this by phrasing things in the kind of way I described in the post?

@miblo I don't think this would actually work. I build these sorts of models for my own company. If you look at the math behind LLMs, a slight change in wording isn't going to significantly change the association between the words being used. It's hard to explain without breaking out some seriously complicated theorems, but these systems are better at organizing data than any human; they'd still be able to see the association there.

An example of this is the paper just released by DeepMind, in which they show how LLMs are able to losslessly compress data over 50% better than PNG and over 30% better than FLAC. That's a model that's pretty much only been trained on text. You're not going to be able to change the associated vectors by simply rephrasing things slightly.

You can actually show this by reversing compression algorithms like gzip: when reversed, they'll output text like an LLM (just nonsensical), but they'll generate it all the same.
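The "reversed compressor" idea above can be sketched in a few lines. This is not the DeepMind paper's method, just an illustrative toy of the compression/prediction duality: all function names here (`compressed_len`, `next_char`, `generate`) are my own invention, and zlib stands in for gzip since both use DEFLATE. A candidate continuation that makes the whole string compress smaller is, by the compressor's implicit model, a more "probable" continuation, so we can generate text by greedily picking whichever character compresses best.

```python
import zlib

def compressed_len(text: str) -> int:
    """Bytes needed to zlib-compress the text at maximum effort."""
    return len(zlib.compress(text.encode("utf-8"), level=9))

def next_char(context: str, alphabet: str) -> str:
    """Pick the candidate character whose addition yields the
    smallest compressed size for context + candidate."""
    return min(alphabet, key=lambda c: compressed_len(context + c))

def generate(seed: str, n: int,
             alphabet: str = "abcdefghijklmnopqrstuvwxyz .") -> str:
    """Greedily extend the seed one compressor-preferred character
    at a time, 'running the compressor backwards' as a generator."""
    text = seed
    for _ in range(n):
        text += next_char(text, alphabet)
    return text

print(generate("the cat sat on the mat. the cat ", 12))
```

Greedy selection over a DEFLATE-style compressor strongly favors repeating substrings already present in the context, since back-references are cheap, so the output tends to be locally text-like but globally nonsensical — which is exactly the behavior the comment describes.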
@nyquildotorg: A Gordian Knot, as envisioned by Rube Goldberg.
@nyquildotorg it's only a short step from "can you melt eggs" to "it's what plants crave", a very short step.
@ReverendMoose that's the bit that got me thinking in that direction, but also the entire history museum sequence where Charlie Chaplin was leading the 3rd reich
@nyquildotorg I can't believe the people pushing AI integrations like money too. We should hang out together.

@nyquildotorg So, we've automated bunk, but not the harder job of debunking.

Why am I not even surprised?

@nyquildotorg I think you need to be REALLY careful with your language + referencing That Movie. It's a celebration of ableism and eugenics, and deeply fascist even if it presents as "liberal humor" poking fun at capitalism and the willfully ignorant etc.
@nyquildotorg I guess #Google's #Search is indeed unfixably #Enshittified beyond usable and people should switch to better options like #DuckDuckGo...
@nyquildotorg It's the end of knowledge. We'd be better off turning the internet off and reading encyclopedias.
@nyquildotorg do eggs have electrolytes though?
@nyquildotorg From my experience of company board meetings AI is probably going to replace Wikipedia as the most stupid source to base business decisions on.
On one occasion another director referenced a Wikipedia article that said third-party toner was as good as the genuine product. I had my laptop on the desk, and 10 minutes later I pointed out that the article no longer said that. (In fact my corrections still remain 10 years later.)
Wikipedia:Wikiality and Other Tripling Elephants - Wikipedia

@nyquildotorg
There's also the thing with 'Glorbo' too.
I personally love the one about 'Goncharov (1973)'. Look up Lynda Carter's filmography on Google 😉
This starts somewhere, but it starting from AI is very worrying.

@nyquildotorg

#AI

This exists in human-written scholarly papers as well. Citations get copied from one paper to the next without review. "Artificial Intelligence is no match for real stupidity." (everybody, ~2007)

I do think the [Artificial Intelligence] descriptor should be replaced with [Clever Idiocy] so we truly admit the limitations of the tools.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7500547/

MyCites: a proposal to mark and report inaccurate citations in scholarly publications

Inaccurate citations are erroneous quotations or instances of paraphrasing of previously published material that mislead readers about the claims of the cited source. They are often unaddressed due to underreporting, the inability of peer reviewers and ...

PubMed Central (PMC)

@Ralph @nyquildotorg

I prefer the term "Automated Idiocy"

That way we can use the same acronym

@bornach @Ralph @nyquildotorg but "automated mansplaining as a service" is so much more appropriate in the context of where, how and why this is mainly happening

@vanderZwan @bornach @nyquildotorg

I think mansplaining requires demeaning the audience. ChatGPT is a very polite raconteur. Hopefully OpenAI Codex will get us past the 'service' issue for general requests. I have to admit, AMAAS would be a good acronym.

https://en.wikipedia.org/wiki/OpenAI_Codex


@Ralph @bornach @nyquildotorg mansplaining is often demeaning, but I think the core element is a dude asserting something about topics they know nothing about with utterly unearned confidence. The demeaning part is doing that to people who often actually know more about the subject than them (like explaining to women how sexism works or something).

And asserting things with unearned confidence is very much what LLMs do as well.

@bornach @nyquildotorg

I think "Mechanical Savant" is what I was reaching for. It contains the idea that the box is only educated in one dimension (or domain of knowledge).

I like the idea of AI as an acronym (AI! AI! AI! 🐶), but I worry about acronym overloading (it will confuse the AIs).

@Ralph @bornach oh you know, just me pedantically interjecting that "AI" is not an acronym, it's an initialism.

@nyquildotorg @bornach

I never complain about the need for truth and understanding (and I have the same compulsion).

@nyquildotorg @Ralph @bornach
that's not pedantry. Pedantry is pointing out *to you* that the class initialism is a subset of the class acronym. Being both definitely does not make it stop being one.

@obob @nyquildotorg @bornach

This gets better and better! Acronyms are pronounceable, making them subsets of initialism.

(What do you call a "pedant-gasm"? Cause that just sounds creepy.)

@Ralph @nyquildotorg @bornach incorrect. RaDAR for example, one of the older technical acronyms, is not an initialism.