I’m working on an AI policy for my org that allows us to opt out of AI note taking and prohibits AI in our comms/storytelling. here is my list of reasons for the policy, but my board is asking me to cite sources. Can you help me with any good references you would cite for any of these? (Or an edit or restatement where I’ve gotten it wrong or inaccurate?)

*if you want to argue about why I shouldn’t have this policy kindly crawl into a hole in the ground and cover yourself with soil

@seachanger Here's a recent Guardian article that speaks to item #2: https://www.theguardian.com/technology/ng-interactive/2026/feb/28/chatgpt-ai-chatbot-mental-health

EDIT: This one needs a content warning for suicide, to be clear.
Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.

Kate Fox says Joe Ceccanti was the ‘most hopeful person’ before he started spending 12 hours a day with a chatbot

The Guardian
@seachanger Not sure about the methodology behind this one, but I've heard about it at least (re: #10): https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity

@seachanger Regarding item #5: https://www.npr.org/2025/09/05/nx-s1-5529404/anthropic-settlement-authors-copyright-ai

It's important to note, though, that the ruling walks a fine line: the training of Claude was considered "fair use" (not a ruling I personally agree with, but hey); however, the fact that Anthropic pirated all the materials was not. Anthropic settled that claim rather than take it to trial, it seems.
@seachanger speaking to maybe #6 and #7: not all that is sold as "AI" is actually AI, which isn't quite what I had in mind while looking for privacy and safety concerns, but it's certainly related

https://data-workers.org/france/
Behind The Face of AI, by Clara and B.

This short comic details the experiences of data workers in France working for Scale AI as human chatbots. They explore what it is to be human and the consequences of acting like a machine.

Data Workers' Inquiry
@seachanger speaking to #3 a little: https://www.theguardian.com/technology/2026/jan/15/elon-musk-xai-datacenter-memphis

The other companies aren't quite as blatant as Musk. Not sure I have any good definitive links on that; they definitely like to hide and fudge the numbers ("watts per inference!"), so I was trying to find something about the data center strain on grid capacity, but a lot of it is paywalled…
Elon Musk’s xAI datacenter generating extra electricity illegally, regulator rules

Win for Memphis activists who say ‘Colossus’ facilities add extra pollution to already overburdened communities

The Guardian
@aud @seachanger I was going to suggest a rewording, since courts have deemed that using it isn't theft (I also disagree with the courts), but maybe indicate that materials for training are often collected in ways that are illegal, and for which private citizens have actively been prosecuted in the past.

@aud @seachanger That's about the only actual study we have and it has a fairly low sample size, unfortunately. There are some other articles going around about the high cost and failure rates of AI projects though.

Methodology-wise, it's okay and at least tries to control for perception vs reality.

@seachanger i will check around for cites. Having dealt with boards, the first thing that came to mind was that "AI can't donate"
DAIR (Distributed AI Research Institute)

DAIR is a space for independent, community-rooted AI research, free from Big Tech’s pervasive influence.

DAIR (Distributed AI Research Institute)

@seachanger and you may also find this one useful, including its citations

https://pmc.ncbi.nlm.nih.gov/articles/PMC10186390/


@seachanger I would look to the work of @emilymbender and her colleagues
@sarae i have followed them for a while but now I am trying to just get some clear sources pasted in that people might know of

@seachanger @sarae

The endnotes in our book are full of sources:
https://thecon.ai

@seachanger @sarae

Also, not sure what you mean by sources people might know of, but ... our book is a source!

@emilymbender
Thank you! I just thought people might reference recent stories or reports that back the specific points I was making. I am also adding your book and a few others from https://monetdiaz.com/books-critical-AI.html

@sarae

@seachanger @emilymbender @sarae

Not quite at my fingertips right now and I'll go have a look, but the consulting firm Deloitte is a "case study as a dire warning", as is Air Canada: both were held liable and had to reimburse clients for letting AI fuckups into their official products or communications.

@seachanger @emilymbender @sarae

Boards are usually much more receptive to "well, this is a risk that could get your own ass handed to you in court, minus any cash you had in your back pocket" than they are to "this is a highly problematic tool that is deceptively easy to misuse badly" because everyone thinks everyone else who got in trouble was just not as smart as they are.

@johannab @seachanger @emilymbender @sarae

Just to elaborate on that excellent point:

1. You may be held legally liable for things these tools do that you have no way of controlling.

@johannab @seachanger @emilymbender @sarae

2. Right now all these companies are operating at extravagant losses in order to entice you to use their products. Once you are dependent on them, they plan to recover their investment by jacking up prices and operating as monopolies. Don’t forget to factor that into your cost/benefit analysis.

The reasons you list are more important ones, but cost and liability may get their attention.

Deloitte to pay money back to Albanese government after using AI in $440,000 report

Partial refund to be issued after several errors were found in a report into a department’s compliance framework

The Guardian

@seachanger I probably do have some, but I would need to do some cross-referencing I can't do at the moment

https://ai-sucks-actually.fyi/

AI Sucks, Actually

that's it, that's the thesis

[…] Even if the accuracy problems were solved, and AI-generated summaries reliably captured all the essential points of a text, it would still be a bad idea to use them. Creating your own summaries is a crucial step in any literature study. When you read and summarize a text, you create the neural connections necessary to memorize and apply the information well in an exam, experiment, or research paper. Generating it with a click is a harmful form of cognitive offloading and will erode these skills. Writing it yourself will reveal the nuances of an academic text and allow you to register those elements that you deem essential to whatever you are working on. 

https://www.tue.nl/en/our-university/library/library-news/24-02-2026-are-ai-generated-summaries-suitable-for-studying-and-research

@darby3

#theaicon #aihype #llm

Are AI-generated summaries suitable for studying and research?

Despite didactic, ethical, and environmental concerns, the use of GenAI is on the rise in academia. For most applications, the jury is still out on whether and how they will benefit education and research in the long term. But it’s already safe to conclude that one popular use case is, in fact, a bad one: AI-generated summaries.

@oatmeal @darby3 Yes, if we don't use our skills, they will degrade.
💡Smartman Apps📱 (@[email protected])

1/x #MathsMonday #Maths #Math Over time I've saved many screenshots of #AI #slop #aiSlop stuffing up #Mathematics big time, and on occasion I've had cause to reshare them. At times I have cursed that I can only attach 4 pics per post. Then I realised, what am I worried about? Just post them all in a thread, and then I can link to the thread (or individual screenshots), and can add to it as more come up 🙂 P.S. feel free to reply with more. I hereby present to you, AI's greatest 5hits...

dotnet.social

@seachanger

MIT recently released a study on the long-term cognitive effects of AI use. (Spoiler: they're not good effects.)

https://publichealthpolicyjournal.com/mit-study-finds-artificial-intelligence-use-reprograms-the-brain-leading-to-cognitive-decline/

MIT Study Finds Artificial Intelligence Use Reprograms the Brain, Leading to Cognitive Decline - Science, Public Health Policy and the Law

By Nicolas Hulscher, MPH

Science, Public Health Policy and the Law
Report on the data centers of Aragón – The price of the clouds – Tu Nube Seca Mi Río

@cafechatnoir @seachanger pinging @WeirdWriter, who put in beautiful, powerful words how that experience of “semantic ablation” affected his writer friend. At least, it seems to be recoverable, but at what cost…
@juandesant @cafechatnoir @seachanger Yay, thank you for tagging! My narrative is at the end. I've seen it have drastically negative psychological consequences for everybody who uses it. Writers, readers, anybody really. I recently had a scenario where a trans friend of mine quit writing altogether because everybody was praising her for doing such a fantastic job of prompting the thing when she never used an LLM at all. The truly horrifying part was that the positive comments were the more disturbing ones, because they praised an LLM for creating work she made when she has never touched an LLM in her life. I'm going to write about it, but right now the emotions are swirling around and I need to calm down after these incidents. AnyWho, if you have not read it yet, the first story is https://sightlessscribbles.com/the-colonization-of-confidence/
The Colonization of Confidence, Sightless Scribbles

A fabulously gay blind author.

@cafechatnoir @seachanger It sounds less like presence of bad effects of not-writing and more like absence of good effects of writing.

@seachanger

Oh, and not necessarily something you can "cite" - but on the prohibition on AI in comms: The people you're communicating with deserve your time and energy in creating those messages.

(I'm still salty about one of our executives sending out an intro email to us where he gleefully announced he'd used ChatGPT for it. How little does he think of us if he can't even be arsed to write his own email?)

@seachanger contact a librarian ...not sure if you are connected to a university. I wasn't, but university librarians were always very happy to help me, and they're fast.
Does AI Actually Free Up Workers’ Time? | Research UC Berkeley

Berkeley Haas researchers evaluate the impact of AI in the workforce.

@seachanger this is a great resource, I think you will find some sources here: https://libguides.amherst.edu/genAI/ethics
Research Guides: Generative AI: Ethics and Costs

Research Guides: Generative AI: Ethics and Costs

@arod oh wow yes that is what I was looking for
@arod @seachanger great list of the reasons to not use AI.

@seachanger For #6. This one links to the Stanford report it discusses.

https://www.kiteworks.com/cybersecurity-risk-management/ai-data-privacy-risks-stanford-index-report-2025/

Anecdotally, even though Kagi Translate is instructed not to divulge its prompt to anyone, people can easily get it to do so by asking it to create, or show the output of, programs that do exactly that.

I can dig up those examples if you want.

AI Data Privacy Wake-Up Call: Findings From Stanford's 2025 AI Index Report

New Stanford AI Index Report reveals a 56% surge in AI privacy incidents and declining public trust, offering essential guidance for organizations to safeguard sensitive data and navigate intensifying regulations.

Kiteworks | Your Private Data Network

@seachanger here are a couple of links on ai's role in digital colonialism in africa and south america in case that's helpful!

https://www.ictworks.org/african-digital-colonialism/ (a synopsis of https://www.ictworks.org/wp-content/uploads/2025/01/African-Digital-Colonialism.pdf)
https://peopledaily.digital/insights/the-hidden-cost-of-ai-africas-invisible-workforce-and-digital-servitude (ironically uses an ai generated stock image as the article header)
https://www.technologyreview.com/supertopic/ai-colonialism-supertopic/ (keeps trying to sell me ai books lol)

African Digital Colonialism is the New Face of Worker Exploitation - ICTworks

African digital colonialism is exploitation of labor, resources, and data by Silicon Valley technology companies with no equitable returns

ICTworks
@seachanger Here's one potential reason: a recent meta-analysis concluded that the general public is terrified of AI and has near-zero trust in AI products https://onlinelibrary.wiley.com/doi/10.1002/cb.70144?af=R

@seachanger don't they have an "AI IS GOING GREAT" website?

https://www.web3isgoinggreat.com/

like they had for crypto shit.

Web3 is Going Just Great

A timeline recording only some of the many disasters happening in crypto, decentralized finance, NFTs, and other blockchain-based projects.

@tootbrute @seachanger

I think what you're looking for is https://pivot-to-ai.com/ by @davidgerard.

Pivot to AI

It can't be that stupid, you must be prompting it wrong

Pivot to AI
Project Overview ‹ Your Brain on ChatGPT – MIT Media Lab

Check the project's website: https://www.brainonllm.com. With today's wide adoption of LLM products like ChatGPT from OpenAI, humans and businesses engage and u…

MIT Media Lab

@seachanger @janeishly I really like what you're doing here. You may want to add that there is little transparency around the training data. Many models are trained on data that contains harmful biases and prejudices against BIPOC, LGBT+ people, etc. Training may also involve the exploitation of labor in developing countries. Good luck with getting a strong policy approved 👊
People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

Marriages and families are falling apart as people are sucked into fantasy worlds of spiritual prophecy by AI tools like OpenAI's ChatGPT

Rolling Stone
@seachanger you're missing the bias involved in training these models
@seachanger lol, tbh, I think the way to go here is to have ChatGPT hallucinate sources, provide those, and let the board figure out that you just gave them another reason...

@seachanger

In addition to what others have said, for #2: AI Mental Health Project

Our Mission — AI Mental Health Project

AI Mental Health Project
@seachanger hi. Just a small thing, but the Anthropic lawsuit over training on authors' works is an excellent example https://apnews.com/article/anthropic-copyright-authors-settlement-training-f294266bc79a16ec90d2ddccdf435164
Anthropic pays authors $1.5 billion to settle copyright infringement lawsuit

Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot. The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement. The company has agreed to pay authors or publishers about $3,000 for each of an estimated 500,000 books covered by the settlement.

AP News

@seachanger
I'd suggest two additional items...

First, if you ask an LLM (which was trained on a variety of licensed works) to code something for you, it may dump licensed code verbatim, without the accompanying license. That creates a legal minefield of risk.

Copyright is another angle. There have been a lot of attention-grabbing headlines, but from what I understand, in the US a work must involve significant human contribution to be copyrightable, far beyond a human merely prompting an AI. If your company just asks an LLM to spit out a product, don't expect the result to be protected work.

https://www.techspot.com/news/106562-us-copyright-office-rules-out-copyright-ai-created.html

@seachanger @IrrationalMethod I think it’s a great idea and if you’re permitted to share the final policy would love to see it. I work in the NFP space in Australia and know orgs here need help with policies on this front.