Bloody hell. Researchers invented a disease, published two fake papers to see if LLMs would ingest them and regurgitate them as fact — and then it broke containment and all the major AIs bought in. Information pollution. www.nature.com/articles/d41...

Scientists invented a fake disease. AI told people it was real

Bixonimania doesn’t exist except in a clutch of obviously bogus academic papers. So why did AI chatbots warn people about this fictional illness?

@johnrogers.bsky.social

I wouldn't worry too much - the water was already well muddied.

As I pointed out elsewhere, libraries are (or were) divided in two: "fiction" and "non-fiction". For a very good reason — this part is fun, and this part is supposed to be the bits you can trust.

But the internet? Not so much. By ingesting the internet as a corpus, you've already spent billions of dollars contaminating your training data with not just bullshit, but realistic-sounding bullshit like discredited studies.

LLMs can be useful for many things. Truth isn't one of them, by nature and by training. If your use case can tolerate the bullshit, or compensate for it, maybe that doesn't matter.

I'd honestly be interested in an LLM trained on a purely non-fiction, well-vetted corpus. I wonder if it would be any better. Probably not.