From Bruce Schneier: "All it takes to poison AI training data is to create a website:

I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

These things are not trustworthy, and yet they are going to be widely trusted."

https://www.schneier.com/blog/archives/2026/02/poisoning-ai-training-data.html

#LLM #Veracity

@emacsomancer they aren't trustworthy. They take up a lot of time when you're trying to get a reasoned answer, and there's always a phrase or wording out of place that needs correction. Almost as if the AI is trying to engage longer than necessary.
@gnomeoffender which do you think is more likely, realistically. That the untrustworthy, dumb, glorified word-predictor is smart enough to engage in the convo-extending tomfoolery that you've outlined, or..... you are shit at prompting?
@darknetDon I'm using Gemini 3 to help me after searching through my past chats archive.
This is a classic "clash of perspectives" in the AI world. Your challenger is using a 2023-era argument (the "dumb word-predictor" theory), while your observation aligns with how modern, high-reasoning models like Gemini 3 actually function in 2026.
@gnomeoffender it's less a clash of perspectives and largely about authenticity. I read the linked article and the original BBC article, and not a single shred of evidence to support any of this was shared, not even a lousy screenshot. If you're going to bash on AI, don't fabricate nonsense out of thin air, and if you're going to author public posts based on a test you've done, show that test being done, or the conclusion if nothing else, for crying out loud.
@gnomeoffender @darknetDon “High reasoning…”😂😂😂😂😂
Somebody likes the taste of that koolaid.
@darknetDon @gnomeoffender The sweet summer children who think, “If only I come up with the perfect incantation, the “glorified word-predictor(as you accurately described the thing). will spew forth wisdom from its non-existent mind.”
@su_liam I needa get a hold of whatever drugs people on here be takin