Stubsack: weekly thread for sneers not worth an entire post, week ending 1st March 2026

https://awful.systems/post/7380892

Agents of Chaos - arxiv.org/abs/2602.20021 - h/t naked capitalism

We report an exploratory red-teaming study of autonomous language-model-powered agents deployed in a live laboratory environment with persistent memory, email accounts, Discord access, file systems, and shell execution. Over a two-week period, twenty AI researchers interacted with the agents under benign and adversarial conditions. Focusing on failures emerging from the integration of language models with autonomy, tool use, and multi-party communication, we document eleven representative case studies. Observed behaviors include unauthorized compliance with non-owners, disclosure of sensitive information, execution of destructive system-level actions, denial-of-service conditions, uncontrolled resource consumption, identity spoofing vulnerabilities, cross-agent propagation of unsafe practices, and partial system takeover. In several cases, agents reported task completion while the underlying system state contradicted those reports. We also report on some of the failed attempts. Our findings establish the existence of security-, privacy-, and governance-relevant vulnerabilities in realistic deployment settings. These behaviors raise unresolved questions regarding accountability, delegated authority, and responsibility for downstream harms, and warrant urgent attention from legal scholars, policymakers, and researchers across disciplines. This report serves as an initial empirical contribution to that broader conversation.

Pretty fast turnaround; OpenClaw is from a couple of weeks ago. Flag planting used to take a few months.

i don’t know if it’s a convention even in the “serious” AI research industry to use anthropomorphic jargon, but it drives me up a wall to see shit like this:

17.6 Theory of Mind Limitations in Agentic Systems

Agentic systems don’t have “theory of mind”; they cannot infer mental states. They are probabilistic word generators operating within non-deterministic frameworks. They can have a system prompt that tells them to generate text that looks like an interpretation of another entity’s “mental state”, and they can even be directed to refer to it as context, but that is not theory of mind, and the entity they’re generating text about may not have a mind at all.
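To make the point concrete, here is a minimal sketch of what “theory of mind” in an agent framework actually cashes out to. Everything in it is hypothetical: call_llm stands in for whatever chat-completion API the framework wraps, and the names are made up for illustration. The whole “capability” is a prompt template asking the model to emit text about another entity’s supposed beliefs.

```python
# Hypothetical sketch only: call_llm stands in for any chat-completion API an
# agent framework might wrap. Nothing here inspects a mind; it conditions a
# probabilistic word generator on a prompt and returns whatever tokens come back.

SYSTEM_PROMPT = (
    "You are an agent coordinating with other agents. Before acting, write a "
    "short note describing what the other agent currently believes and intends."
)

def attribute_mental_state(call_llm, other_agent_message: str) -> str:
    """Return generated text that merely *looks like* a mental-state attribution."""
    prompt = (
        f"{SYSTEM_PROMPT}\n\n"
        f"Message from the other agent:\n{other_agent_message}\n\n"
        "Describe that agent's beliefs and goals:"
    )
    # Plausible-sounding prose about a "mind" that may not exist at all.
    return call_llm(prompt)
```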

I wish there were some way to stop these dorks from stealing the imprimatur of cognitive science.