Here's what that Claude Code source leak reveals about Anthropic's plans
A persistent agent, stealth "Undercover" mode, and... a virtual assistant named Buddy?!
https://arstechnica.com/ai/2026/04/heres-what-that-claude-code-source-leak-reveals-about-anthropics-plans/?utm_brand=arstechnica&utm_social-type=owned&utm_source=mastodon&utm_medium=social

We can quit #cybersecurity and just go farm potatoes or something. After 25 years of #appsec one of the most talked-about tech companies invents a daemon process that

makes use of a file-based “memory system” designed to allow for persistent operation across user sessions.

Sure. Just store your system instructions in a random text file.

Why are we installing endpoint protection on this system?

Why do we verify cryptographic signatures on software updates to this system?

Why are we building a zero trust security environment?

Why do we do scan email to avoid social engineering emails?

Our AI-assisted users are gonna YOLO right past all that. And if they can’t get past our #security controls, this agentic Frankenstein will write itself some markdown and work quietly in the background figuring out how to bypass something the user couldn’t bypass on their own.

This is #infosec in 2026

This article about #anthropic ‘s #claude CLI is also hysterical (in a making me want to give up #security and join a commune kind of hysterical) because of the anthropomorphising of the AI.

What is the most frustrating aspect of LLMs? Many would use the anthropomorphic term “hallucination.” Apparently #hallucinations are bad but “dreams” are good?

When a user goes idle or manually tells Anthropic to sleep at the end of a session, the AutoDream system would tell Claude Code that “you are performing a dream—a reflective pass over your memory files.”

“Why does my code say that Wonder Woman is running a taco truck downtown and I’m the only person who can save her dog?” Oh. Right. It was dreaming.