I was in a meeting trying to share some relevant #NoAI articles, and I could only find four quickly.

Could people add more articles to this thread please?

Please share too!

(mapcar #'emacsomancer objs) (@[email protected])

From Bruce Schneier: "All it takes to poison AI training data is to create a website:

> I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….

> Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled.

> Sometimes, the chatbots noted this might be a joke. I updated my article to say “this is not satire.” For a while after, the AIs seemed to take it more seriously.

These things are not trustworthy, and yet they are going to be widely trusted."

https://www.schneier.com/blog/archives/2026/02/poisoning-ai-training-data.html

#LLM #Veracity

Anthropic AI safety researcher quits, says the ‘world is in peril’ - https://globalnews.ca/news/11664538/anthropic-ai-safety-researcher-mrinank-sharma-quits-concerns/

Anthropic was founded in 2021 by a breakaway group of former OpenAI employees who pledged to design a more safety-centric approach to AI development.

Artificial intelligence research has a slop problem, academics say: ‘It’s a mess’ - https://www.theguardian.com/technology/2025/dec/06/ai-research-papers

AI research in question as author claims to have written over 100 papers on AI that one expert calls a ‘disaster’

David Gerard (@[email protected])

Amazon holds engineering meeting following AI-related outages: Ecommerce giant says there has been a ‘trend of incidents’ linked to ‘Gen-AI assisted changes’

https://www.ft.com/content/7cab4ec7-4712-4137-b602-119a44f771de

archive: https://archive.is/wXvF3

l o l

Juliet E McKenna (@[email protected])

I am one of the nearly 10,000 authors contributing our names to Don't Steal This Book, a protest launched today at the London Book Fair. If AI developers wish to use our work in their software, they can ask us for permission. If we agree, they can pay us to license it on clearly defined terms. This is how copyright operates. Tech companies ignoring copyright is theft. For the UK government to even consider allowing this to continue is a disgrace.

https://www.theguardian.com/technology/2026/mar/10/thousands-authors-publish-empty-book-protest-ai-work-copyright

#books #writing

Jenniferplusplus (@[email protected])

"AI can make mistakes, always check the results"

I fucking loathe this phrase and everything that goes into it. It's not advice. It's a threat.

You probably read it as "AI is _capable_ of making mistakes; you _should_ check the results". What it actually says is "AI is _permitted_ to make mistakes; _you are liable_ for the results, whether you check them or not".

Except "you" is generally not even the person building, installing, or using the AI. It's the person the AI is used on: https://thepit.social/@peter/116205452673914720

Christine Lemmer-Webber (@[email protected])

It turns out GenAI code changes are causing serious incidents and outages at Amazon with "high blast radius": https://arstechnica.com/ai/2026/03/after-outages-amazon-to-make-senior-engineers-sign-off-on-ai-assisted-changes/

Junior / middle engineers are no longer allowed to push GenAI code to production without senior engineer review. (HT @[email protected])

EDIT: Better link above than before. Old one is here: https://www.ft.com/content/7cab4ec7-4712-4137-b602-119a44f771de


@rowlandm near the end of last summer I wrote up a few of my own things (largely distilled from existing sources; these are my framings but not necessarily my original ideas). Here are a few examples:

https://cs.wellesley.edu/~pmwh/advice/aiDRY.html

https://cs.wellesley.edu/~pmwh/advice/aiProductivity.html

https://cs.wellesley.edu/~pmwh/advice/aiHammer.html

And here's the collected page with a curated list of other sources, although it's a bit dated now:

https://cs.wellesley.edu/~pmwh/advice/index.html

Peter Mawhorter