I just want simple tools that do one thing well. The tool must get the fuck out of the way of what I'm trying to accomplish. It also must reject LLMs in its development.
So when I look around at code editors I'm kinda fucked. Vim is AI slop. Emacs isn't yet, but it's also not a simple tool that does one thing well. Almost every IDE that's cross platform enough for my needs either gets in my way or is fully embracing AI slop.
😡
If you're unsure how rare LLM plagiarism is or isn't for 💻 programming code, watch this clip! ⚠️
Full source: https://www.youtube.com/watch?v=xvuiSgXfqc4 (Not legal advice, watch yourself and draw your own conclusions.) #llmslop #antislop #antiai #noai #stopai #llm #llms #ai #generativeAI #opensource
Help me boost this post if you're curious what the Linux foundation thinks: https://hachyderm.io/@ell1e/116285351290767548
AI isn't a tool.
Engadget: Crimson Desert developer apologizes and promises to replace AI-generated art. “The developer behind the open-world RPG Crimson Desert has issued an official apology after players discovered several instances of AI-generated art in the game. Pearl Abyss posted on X that it released the game with some 2D visual props that were made with ‘experimental AI generative tools’ and forgot to […]
https://rbfirehose.com/2026/03/28/engadget-crimson-desert-developer-apologizes-and-promises-to-replace-ai-generated-art/
Engadget: Crimson Desert developer apologizes and promises to replace AI-generated art. “The developer behind the open-world RPG Crimson Desert has issued an official apology after players di…
I have a subquestion, perhaps the people in this thread will be able to help:
Surely there are ways to secure documents against being read by an 'AI' chatbot, right? I'm thinking invisible text that gives instructions, but less naive.
I found some examples in this paper [https://arxiv.org/abs/2506.11113], but 1) the paper is written from the point of view of trying to overcome these attacks (yikes), and 2) all examples given involve rewriting the text itself.
I wonder if there's an easier way to either break the technology completely, or at least detect the 'AI' usage somehow.

Peer review is essential for maintaining academic quality, but the increasing volume of submissions places a significant burden on reviewers. Large language models (LLMs) offer potential assistance in this process, yet their susceptibility to textual adversarial attacks raises reliability concerns. This paper investigates the robustness of LLMs used as automated reviewers in the presence of such attacks. We focus on three key questions: (1) The effectiveness of LLMs in generating reviews compared to human reviewers. (2) The impact of adversarial attacks on the reliability of LLM-generated reviews. (3) Challenges and potential mitigation strategies for LLM-based review. Our evaluation reveals significant vulnerabilities, as text manipulations can distort LLM assessments. We offer a comprehensive evaluation of LLM performance in automated peer reviewing and analyze its robustness against adversarial attacks. Our findings emphasize the importance of addressing adversarial risks to ensure AI strengthens, rather than compromises, the integrity of scholarly communication.
They have finally done it! Yahoo has totally nailed any hope of me using it again! I remember when it was good 🥲
via @AssociatedPress

SAN FRANCISCO (AP) — Internet trailblazer Yahoo is exploring technology’s next frontier with Scout, an answer engine powered by artificial intelligence. Scout seems insightful, based on its response to a question posed by The Associated Press about why one of Silicon Valley’s brightest stars faded …