What are good examples of concrete acts of resistance against "AI"? I am putting together an overview and am surely missing great stuff!

I have:
Data centre opposition (thx to @gerrymcgovern )
Examples of sabotage from @asrg
Some ideas about practical refusal from @danmcquillan

Also everyday acts are great (such as not following orders to use "AI") if they are documented somewhere, somehow. Most references tend to be rather vague (e.g. in this otherwise great article
https://restofworld.org/2026/techno-negative-thomas-dekeyser-fighting-ai/)

Why refusing AI is a fight for the soul

Author Thomas Dekeyser explains why modern resistance to Big Tech is a deeply sane response to a narrow vision of humanity.

Rest of World

@alineblankertz

I have a subquestion, perhaps the people in this thread will be able to help:

Surely there are ways to secure documents against being read by an 'AI' chatbot, right? I'm thinking invisible text that gives instructions, but less naive.

I found some examples in this paper [https://arxiv.org/abs/2506.11113], but 1) the paper is written from the point of view of trying to overcome these attacks (yikes), and 2) all examples given involve rewriting the text itself.

I wonder if there's an easier way to either break the technology completely, or at least detect the 'AI' usage somehow.

#antiAI

Breaking the Reviewer: Assessing the Vulnerability of Large Language Models in Automated Peer Review Under Textual Adversarial Attacks

Peer review is essential for maintaining academic quality, but the increasing volume of submissions places a significant burden on reviewers. Large language models (LLMs) offer potential assistance in this process, yet their susceptibility to textual adversarial attacks raises reliability concerns. This paper investigates the robustness of LLMs used as automated reviewers in the presence of such attacks. We focus on three key questions: (1) The effectiveness of LLMs in generating reviews compared to human reviewers. (2) The impact of adversarial attacks on the reliability of LLM-generated reviews. (3) Challenges and potential mitigation strategies for LLM-based review. Our evaluation reveals significant vulnerabilities, as text manipulations can distort LLM assessments. We offer a comprehensive evaluation of LLM performance in automated peer reviewing and analyze its robustness against adversarial attacks. Our findings emphasize the importance of addressing adversarial risks to ensure AI strengthens, rather than compromises, the integrity of scholarly communication.

arXiv.org