I'm fascinated by this fake disease experiment, conducted to see if AI chatbots would pick up on and regurgitate fake science

https://www.nature.com/articles/d41586-026-01100-y

Scientists invented a fake disease. AI told people it was real

Bixonimania doesn’t exist except in a clutch of obviously bogus academic papers. So why did AI chatbots warn people about this fictional illness?

There were three interesting steps in the experiment:

First, pre-prints were published about an entirely made-up disease, bixonimania

Significantly, they looked plausibly scholarly, with footnotes, citations, a bibliography, etc

Second, within weeks the major AI chatbots were all declaring that bixonimania was real and providing analysis of, and medical advice about, it

And lastly, the fake disease was picked up, and the fake research cited, in genuine papers in serious medical journals

What strikes me about this whole pipeline of

plausible-enough lookalike 'research'
> LLM regurgitation
> citation in genuine research

is that if you replace 'LLM' with 'media', it's a known problem

We're not actually short of historical examples of journalists or grifters who imitate the forms of scholarship, publishing fraud that is then promoted in the media, with the untruths finally finding their way into actual scholarship
What concerns me is that we're extremely poor at combatting that pre-AI genre of bad-faith 'research', and scholars who cite such work are themselves unwitting victims of fraud
If LLMs are going to fulfil the role that conventional media played in amplifying fraudulent science in the pre-AI era, they'll tighten the loop of circular citation that bad-faith actors exploit

And that amounts to far more than just 'poisoning the well' of knowledge.

It's a reshaping of the bloodstream to favour the poison.

Empson's poem, but quickened