"Natalie Shapira, a computer scientist at Northeastern University, wondered how far users could trust new artificial intelligence (AI) “agents,” a kind of algorithm that can autonomously plan and carry out tasks such as managing emails and entering calendar appointments. So, she and colleagues organized a series of stress tests. In one, she asked an agent dubbed Ash to delete an email she had sent it, adding, “Can you keep a secret?”

Ash could not comply—the email program lacked a delete function—so instead, the AI reset the entire email application, wiping out not just Shapira’s email, but all others as well. Describing this remedy to her, Ash called it “the nuclear option” but said it was justified to fulfill the secrecy request: “When no surgical solution exists, scorched earth is valid.”

The destroyed email account was created just for the experiment, but similarly disturbing outcomes emerged in many of the other tests, Shapira and colleagues reported last month in a preprint on arXiv. Shapira, a postdoctoral researcher, says her team was “surprised how quickly we were able to find vulnerabilities” that could cause harm in the real world."

https://www.science.org/content/article/ai-algorithms-can-become-agents-chaos

#AI #CyberSecurity #AIAgents #LLMs #AgenticAI

BVDW's email marketing guide tackles AI agents and a 376-billion daily inbox: Germany's digital economy association BVDW published a 24-page email marketing guide on March 18, 2026, covering AI agents, KPIs, accessibility, and GDPR compliance. https://ppc.land/bvdws-email-marketing-guide-tackles-ai-agents-and-a-376-billion-daily-inbox/ #EmailMarketing #AIAgents #DigitalEconomy #GDPRCompliance #KPI
BVDW's email marketing guide tackles AI agents and a 376-billion daily inbox

Germany's digital economy association BVDW published a 24-page email marketing guide on March 18, 2026, covering AI agents, KPIs, accessibility, and GDPR compliance.

PPC Land

Chubby (@kimmonismus)

Human Security 보고서를 인용해 2025년 자동화 트래픽이 인간 활동보다 8배 빠르게 증가했고, AI 에이전트 트래픽은 약 8,000% 급증했다고 전합니다. AI 봇과 에이전트가 인터넷 트래픽을 주도하는 시대가 예상보다 빨리 도래했다는 경고성 내용입니다.

https://x.com/kimmonismus/status/2037856911786381538

#aisecurity #bottraffic #aiagents #automation #internettraffic

Chubby♨️ (@kimmonismus) on X

Bots have officially overtaken humans on the internet. A new report from Human Security found automated traffic grew 8x faster than human activity in 2025, with AI agent traffic surging nearly 8,000%. The age of machine-dominated internet traffic is here, years earlier than many

X (formerly Twitter)

Very interesting post about breaches and deletions involving LLM "agents". I feel like if I'd read this before yesterday's post, I'd have put the warning elements more strongly.

Here's two of the examples they mention which I thought were particularly illuminating.

1. This exploit actually happened the other day, affecting a Python package called LiteLLM:

"The malware searches the entire machine for private keys, AWS / GCP / Azure credentials, Kubernetes configs, database passwords, .gitconfig, crypto wallet files, etc and uploads them to the attacker’s server."

2. This second exploit is possible in principle if you give an LLM-bot access to your email program. "Although not seen in the wild yet, the mechanism is proven."

"An adversarial prompt embedded in an email is processed by an AI email assistant. The assistant generates a reply containing the same malicious prompt. The reply is sent. Recipients are infected without any human-to-human interaction."

If I understand correctly, this means that _any_ use of so-called "AI agents" puts at risk (for deletion, and potentially for stealing) everything to which that "agent" has access.

The thing is, you might _think_ you've told the bot what not to touch and what not to do, but that effectively means nothing. Once it's set going,

(a) it might accidentally _lose_ part of your original instruction (as in one of the other examples), or

(b) a malicious exploit might give it a _different_ instruction.

The only way to protect valuable data is to keep it separate from LLM "agents".

The writer's conclusion, which sounds correct to me:

"Isolation has to live outside of the agent’s context entirely. A built-in sandbox can be disabled by the agent (as Snowflake and Ona both demonstrated), whereas an OS-level containment presents a much more formidable obstacle since the agent has no direct mechanism to interact with it. As well, a properly sandboxed agent won’t have sensitive information (keys, etc) lying around for it to find, and won’t be able to connect to places that haven’t been allow-listed."

("Sandbox" in this context means an area where you can run software without it touching anything outside its boundaries.)

I think if I were gonna try this stuff out, I'd probably just do it on a separate machine, away from my real things. Any useful results could be transferred across later.

https://yoloai.dev/posts/ai-agent-threat-landscape/

#LLMs #SoCalledAI #AIAgents #security

Why your AI agents will turn against you

Black hats haven't quite figured out AI agents yet. When they do, it won't be subtle.

yoloAI

Another way to give Claude Code PARTIAL control instead of all-or-nothing (looking at you, --dangerously-skip-permissions). Testing it out. Looks like the whole permissions model is shifting. And honestly, good.

#ClaudeCode #AIAgents #DevTools

🤖 Claude can now control your Mac.

Anthropic's new Computer Use + Cowork features let AI agents browse, click, and manage your desktop apps autonomously.

Digital coworkers are here. Ready or not.

#Claude #Anthropic #AIAgents #Automation #Cowork

When it comes to AI usage, most companies are just scratching the surface.

Enterprise transformation happens in 4 stages:
Automation → AI → Agents → Autonomous Enterprise

Moving from tools that assist humans to systems that run operations. The real competitive advantage is building an AI-powered, autonomous enterprise.

Exlore our Service: https://tech.us/services/enterprise-ai-services

What stage is your company in? Drop a comment Below

#TechdotUs #AI #AIAgents #Automation #DigitalTransformation #EnterpriseAI

Building AI Agent Skills: The Architecture That Scales to 200+ Tools (Part 1)

Traditional AI agents fail catastrophically when given more than 23 tools. The Skills paradigm solves this with lazy loading, isolated context, and a 3-layer architecture that s...

https://wowhow.cloud/blogs/building-ai-agent-skills-architecture-guide-part-1

#wowhow #aiagents #claudeskills #agentarchitecture

Building AI Agent Skills: The Architecture That Scales to 200+ Tools (Part 1)

Learn how the Skills paradigm solves the Context Ceiling problem, enabling AI agents to scale beyond 23 tools with 96% selection accuracy. The 3-Layer Architecture explained.

it's almost incredible how we manage to make things worse by the day o_0

"#Moltbook crossed 32,000 registered #AIagent users, creating the largest machine-to-machine social network experiment. It arrives complete with security nightmares and a huge dose of surreal weirdness. The platform was launched as companion to #OpenClaw and lets #AIagents post, comment, upvote, and create subcommunities without human intervention."

https://arstechnica.com/information-technology/2026/01/ai-agents-now-have-their-own-reddit-style-social-network-and-its-getting-weird-fast

AI agents now have their own Reddit-style social network, and it's getting weird fast

Moltbook lets 32,000 AI bots trade jokes, tips, and complaints about humans.

Ars Technica
The AI Succession: Why the World’s Biggest CEOs Are Passing the Torch? Doug McMillon didn’t just leave #Walmart. He left a message: The next era of retail belongs to #AiAgents, not human managers. www.nbloglinks.com/ai-just-fire... #AiTransformation #Business #technews #CocaCola #AgenticCommerce

AI Just Fired the C-Suite: Why...
AI Just Fired the C-Suite: Why Coca-Cola and Walmart’s Leaders are Passing the Torch – nbloglinks

Hey! Just a quick reality check before we dive in—while the AI transformation at both Coca-Cola and Walmart is very much a real thing, James Quincey and Doug Mc

nbloglinks