I’m working on an AI policy for my org that allows us to opt out of AI note taking and prohibits AI in our comms/storytelling. Here is my list of reasons for the policy, but my board is asking me to cite sources. Can you help me with any good references you would cite for any of these? (Or an edit or restatement where I’ve gotten something wrong or inaccurate?)

*if you want to argue about why I shouldn’t have this policy kindly crawl into a hole in the ground and cover yourself with soil

@seachanger Here's a recent Guardian article that speaks to item #2: https://www.theguardian.com/technology/ng-interactive/2026/feb/28/chatgpt-ai-chatbot-mental-health

EDIT: This one needs a content warning for suicide, to be clear.
@seachanger Not sure about the methodology behind this one, but I've heard about it at least (re: #10): https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

@seachanger Regarding item #5: https://www.npr.org/2025/09/05/nx-s1-5529404/anthropic-settlement-authors-copyright-ai

It's important to note, though, that the ruling walks a fine line: training Claude was deemed "fair use" (not a ruling I personally agree with, but hey), while the fact that Anthropic pirated all the materials was not. Anthropic settled that claim rather than take it to trial, it seems.
@aud @seachanger I was going to suggest a rewording, since courts have deemed that using it isn't theft (I also disagree with the courts there). Maybe instead indicate that training materials are often collected in ways that are illegal, and that ordinary citizens have actively been prosecuted for the same behavior in the past.