https://www.axios.com/2026/03/29/claude-mythos-anthropic-cyberattack-ai-agents
I have a question about this article that will unfortunately expose how little I truly understand about “AI agents.” If you want to avoid an agent being used to target your company, do you have to avoid ChatGPT and co altogether, or just avoid specific tasks?
#cybersecurity #aiagents
Behind the Curtain: AI's looming cyber nightmare

Anthropic fears its unreleased model might unleash a wave of cyberattacks.

Axios

@angelaclinkscales I don’t believe you can avoid an AI Agent targeting your company, when it is used by an external party.

This article highlights the threat of employees using AI and Agentic AI in unmonitored, uncontrolled ways. It’s very easy to fire up an AI tool and give it access to sensitive or proprietary data, exposing that data to methods of compromise no one anticipated.

I think that’s the danger the article warns of. Where companies need to go is educating people about which AI systems and use cases are acceptable and which are not. Meaning: start with a policy, provide education, and then enforce governance.

@scottwilson In the article it sounds like the threat is from within the company due to someone creating agents that go rogue. But if I understand you correctly it could come from any company (that has some of your data, I assume.)

@angelaclinkscales I should have been clearer. Both of those things are threats.

You’re right about the point of the article. I believe the worry is that employees can very easily create Agentic AI tools, then thoughtlessly grant those tools access to everything: company email and docs, HR platforms, calendars, etc.