The rise of technofascism is real and must be stopped. Please read this to learn how to identify AI-generated content!✊Antifascists can never beat fascism with fascist tools! ✊ #genAI #AntiAI #NoAI #QuitGPT #AI https://www.getcybersafe.gc.ca/en/resources/recognize-artificial-intelligence-ai-9-ways-spot-ai-content-online
Recognize artificial intelligence (AI): 9 ways to spot AI content online - Get Cyber Safe

Learning how to identify AI generated content is key in making sure you’re not misled. Here are some tips to help you out.

Get Cyber Safe

I just want simple tools that do one thing well. The tool must get the fuck out of the way of what I'm trying to accomplish. It also must reject LLMs in its development.

So when I look around at code editors I'm kinda fucked. Vim is AI slop. Emacs isn't yet, but it's also not a simple tool that does one thing well. Almost every IDE that's cross platform enough for my needs either gets in my way or is fully embracing AI slop.

😡

#NoAI #AntiAI #programming

If you're unsure how rare LLM plagiarism is or isn't for 💻 programming code, watch this clip! ⚠️

Full source: https://www.youtube.com/watch?v=xvuiSgXfqc4 (Not legal advice, watch yourself and draw your own conclusions.) #llmslop #antislop #antiai #noai #stopai #llm #llms #ai #generativeAI #opensource

Help me boost this post if you're curious what the Linux foundation thinks: https://hachyderm.io/@ell1e/116285351290767548

Blåhaj Lemmy - Choose Your Interface

@gwenhael @eschaton It links this one too: https://codeberg.org/brib/slopfree-software-index which is the "opposite" type of list. #slopfree #noai #antiai
slopfree-software-index

A list of open-source projects that reject AI-generated code

Codeberg.org

Engadget: Crimson Desert developer apologizes and promises to replace AI-generated art. “The developer behind the open-world RPG Crimson Desert has issued an official apology after players discovered several instances of AI-generated art in the game. Pearl Abyss posted on X that it released the game with some 2D visual props that were made with ‘experimental AI generative tools’ and forgot to […]

https://rbfirehose.com/2026/03/28/engadget-crimson-desert-developer-apologizes-and-promises-to-replace-ai-generated-art/
Engadget: Crimson Desert developer apologizes and promises to replace AI-generated art

Engadget: Crimson Desert developer apologizes and promises to replace AI-generated art. “The developer behind the open-world RPG Crimson Desert has issued an official apology after players di…

ResearchBuzz: Firehose

@alineblankertz

I have a subquestion, perhaps the people in this thread will be able to help:

Surely there are ways to secure documents against being read by an 'AI' chatbot, right? I'm thinking invisible text that gives instructions, but less naive.

I found some examples in this paper [https://arxiv.org/abs/2506.11113], but 1) the paper is written from the point of view of trying to overcome these attacks (yikes), and 2) all examples given involve rewriting the text itself.

I wonder if there's an easier way to either break the technology completely, or at least detect the 'AI' usage somehow.

#antiAI

Breaking the Reviewer: Assessing the Vulnerability of Large Language Models in Automated Peer Review Under Textual Adversarial Attacks

Peer review is essential for maintaining academic quality, but the increasing volume of submissions places a significant burden on reviewers. Large language models (LLMs) offer potential assistance in this process, yet their susceptibility to textual adversarial attacks raises reliability concerns. This paper investigates the robustness of LLMs used as automated reviewers in the presence of such attacks. We focus on three key questions: (1) The effectiveness of LLMs in generating reviews compared to human reviewers. (2) The impact of adversarial attacks on the reliability of LLM-generated reviews. (3) Challenges and potential mitigation strategies for LLM-based review. Our evaluation reveals significant vulnerabilities, as text manipulations can distort LLM assessments. We offer a comprehensive evaluation of LLM performance in automated peer reviewing and analyze its robustness against adversarial attacks. Our findings emphasize the importance of addressing adversarial risks to ensure AI strengthens, rather than compromises, the integrity of scholarly communication.

arXiv.org

They have finally done it! Yahoo has totally nailed any hope of me using it again! I remember when it was good 🥲

#AISlop #NoAI #antiAI #AI

via @AssociatedPress

https://flipboard.com/@associatedpress/technology-uvt65hdqz/-/a-MJnHpZP8RqSrjlzVef4LZw%3Aa%3A3199720-%2F0

Yahoo turns to AI-powered answer engine Scout to lead it back to its roots in online search

SAN FRANCISCO (AP) — Internet trailblazer Yahoo is exploring technology’s next frontier with Scout, an answer engine powered by artificial intelligence. Scout seems insightful, based on its response to a question posed by The Associated Press about why one of Silicon Valley’s brightest stars faded …

Associated Press - By MICHAEL LIEDTKE

I may have to leave the AI Slack channel at work. Again. I'm so fucking close to making statements that would get me into trouble.

#AntiAI #NoAI