Bloody hell. Researchers invented a disease, published two fake papers to see if LLMs would ingest them and kick them up as fact — and then it broke containment and all the major AIs bought in. Information pollution. www.nature.com/articles/d41...

Scientists invented a fake disease. AI told people it was real

Bixonimania doesn’t exist except in a clutch of obviously bogus academic papers. So why did AI chatbots warn people about this fictional illness?

On LEV:RED 4 years ago we built a crypto guy a personalized disinformation rabbit hole, and even that’s now outdated. Just turn your disinformation loose in the wild and let it propagate.
@johnrogers.bsky.social I saw a screenshot of a question to an AI about Robert Hooke's activities during the London plague, and the AI answered talking about Daniel Waterhouse, a completely fictitious character from Neal Stephenson's novels who was hanging out with Hooke in the story.
@johnrogers.bsky.social I hope everyone is getting their #bixonimania booster shots
@Twotired @johnrogers.bsky.social no need, bixonimania is cured by Vitamin B13.
@johnrogers.bsky.social Sure, just feed in The Thackery T. Lambshead Pocket Guide to Eccentric & Discredited Diseases and a game guide for Theme Hospital. What could go wrong?
@johnrogers.bsky.social oh my, this is such a treasure 😁
> Even if readers didn’t make it all the way to the ends of the papers, they would have encountered red flags early on, such as statements that “this entire paper is made up” and “Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group”.
Artificial fucking stupidity 🤣

@johnrogers.bsky.social

I wouldn't worry too much - the water was already well muddied.

I pointed out elsewhere, libraries are/were divided in two: "fiction" and "non-fiction". For a very good reason - this part is fun, and this part is supposed to be the bits you can trust.

But the internet - not so much. And by ingesting the internet as a corpus, you've already spent billions of dollars contaminating your corpus with not just bullshit, but realistic sounding bullshit like discredited studies.

LLMs can be useful for many things. Truth isn't one of them, by nature and training. If your use-case doesn't care about or can compensate against the bullshit, maybe that doesn't matter.

I'd honestly be interested in a LLM trained against a purely non-fiction corpus. Something well-vetted. I wonder if it would be any better. Probably not.

@johnrogers.bsky.social This has been a known issue for a while; it's incredibly easy to pollute the data because the models have no capacity for critical thinking. If it's online and no one is disputing it, it's the truth

@johnrogers.bsky.social
‼️‼️

> One paper’s acknowledgements thank “Professor Maria Bohm at _The Starfleet Academy_ for her kindness and generosity in contributing with her knowledge and her lab onboard the USS Enterprise”. Both papers say they were funded by “the Professor _Sideshow Bob Foundation_ for its work in advanced trickery. This works is a part of a larger funding initiative from the University of Fellowship of the Ring and the Galactic Triad”.

@johnrogers.bsky.social LLM’s are litetally destroying science.
@gimulnautti Not as efficiently as you're literally destroying clear and coherent language.
@johnrogers.bsky.social ...phoney researcher named Lazljiv Izgubljenovic, ..😂
(translates to: Lost Liar)
@johnrogers.bsky.social But sure, tell me again about how this will solve the climate crisis if we just believe/invest hard enough.
@johnrogers.bsky.social that 1998 iMac in the picture was in the doctor's office for other reasons though (birds nesting inside the CRT)

@johnrogers.bsky.social

I can’t read the end of the article, but I suspect that now that Bixonimania has made its way into all the major language models, the only way to restore consistency in the dataset is for us to actually develop a condition as described in the preprints. 🙃

@johnrogers.bsky.social

The part of this that worried me the most was actual scientific publications citing the BS research, suggesting AI was used to write them

@johnrogers.bsky.social

Once upon a time, some of the worst people in the world saw the direction things were going in and said "this free and accurate info for all… is bad for us, we can't compete with it! We need to muddy the waters and roll this out FAST AND HARD"

And they came up with

Automated Idiocy
and/or
Augmented Imbecility

@johnrogers.bsky.social
Not surprised. A major problem with LLMs is the way they are trained. Sucking everything in without first checking whether it's pure fiction, fact, or lies purporting to be factual is a simple example of garbage in, garbage out (GIGO)
@johnrogers.bsky.social On the other hand, the same would probably have happened in this case if you replace LLMs with journalists.
@oscherler @johnrogers.bsky.social as if half of us wouldn't just read the headline, look at the plots and go, "science told us so"
@johnrogers.bsky.social Already legend: fake-disease BIXONIMANIA
@johnrogers.bsky.social and far more humans gullibly believed in ivermectin or even injected bleach as a cure for Covid.