From Bruce Schneier: "All it takes to poison AI training data is to create a website:
I spent 20 minutes writing an article on my personal website titled "The best tech journalists at eating hot dogs." Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn't exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….
Less than 24 hours later, the world's leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn't fooled.
Sometimes, the chatbots noted this might be a joke. I updated my article to say "this is not satire." For a while after, the AIs seemed to take it more seriously.
These things are not trustworthy, and yet they are going to be widely trusted."
https://www.schneier.com/blog/archives/2026/02/poisoning-ai-training-data.html
#LLM #Veracity