> The leak, which Meta confirmed, happened when an employee asked for guidance on an engineering problem on an internal forum. An AI agent responded with a solution, which the employee implemented – causing a large amount of sensitive user and company data to be exposed to its engineers for two hours.

lol and - furthermore - lmao

https://www.theguardian.com/technology/2026/mar/20/meta-ai-agents-instruction-causes-large-sensitive-data-leak-to-employees

Meta AI agent’s instruction causes large sensitive data leak to employees

Artificial intelligence agent instructed engineer to take actions that exposed user and company data internally

The Guardian
@davidgerard this is how The Facebook was made, business as usual

@davidgerard a friend of mine caused an incident at fb when he removed an incredible amount of duplicated vendored code - ostensibly the incident was that their ML-based packaging tool suddenly failed in response to the much smaller input. one issue with vendored code is that changes to it are not really detectable; the second issue is that you can't update it for security fixes.

i mention this because facebook has very frequently talked about how security needs to be the default, with tooling built to make it easier to write secure code; sure, it's facebook, perhaps best to ignore that. but there should be no way a single change makes this possible in the first place. twitter was under a 10-year FTC consent decree for failing to sufficiently protect user data (they lied to their engineers about this). accessing user data is not something a single code change can achieve unless user data is already visible to insufficiently permissioned services.
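
(a minimal sketch of the boundary i mean, in python - all names made up, nothing to do with meta's actual stack: user data is only reachable through a service that checks the caller's grants on every read, so a code change in some other service can't widen access by itself)

```python
# hypothetical sketch, not meta's architecture: user data sits behind a
# service that checks the calling service's grants on every read. a code
# change elsewhere can't widen access; only an ACL change can.
GRANTS = {
    "billing-service": {"user:email"},  # may read emails for receipts
    "ads-ranking": set(),               # no direct user-data access at all
}

def read_user_field(caller: str, user_id: int, field: str) -> str:
    if field not in GRANTS.get(caller, set()):
        raise PermissionError(f"{caller} is not granted {field}")
    return datastore_read(user_id, field)

def datastore_read(user_id: int, field: str) -> str:
    return "…"  # the actual storage lookup, elided

print(read_user_field("billing-service", 42, "user:email"))
try:
    read_user_field("ads-ranking", 42, "user:email")
except PermissionError as e:
    print(e)  # ads-ranking is not granted user:email
```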

the point is this sounds like a great thing to leak to the press if you believe your sneaky code path is about to get burned by a whistleblower. it also serves as an explanation to their own employees. a stochastic parrot can't generate a cryptographic key, and any security engineer would know this. what this does say is that the regulatory environment is sufficiently dead in the water that they feel safe leaking criminal neglect to the press.
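
(for contrast, the correct way to mint a key - a hypothetical two-liner, nothing specific to meta: keys come from the OS's CSPRNG, not from sampled model text, which is why the idea of an agent conjuring one is absurd)

```python
import secrets

# a real key comes from the operating system's CSPRNG; sampled LLM output
# carries nothing like this entropy guarantee
key = secrets.token_bytes(32)  # 256 random bits, e.g. for AES-256
print(key.hex())
```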

@davidgerard i mention vendored code because google does the code vendoring too and it's an easy way for someone to hide vulnerabilities from auditors as well as their own employees, which is one plausible interpretation of this leak
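
(to make that concrete, a hypothetical illustration: a vendored copy can silently diverge from upstream, and because it carries no version metadata, no advisory scanner will ever match it against a CVE)

```python
# hypothetical vendored "equivalent" of hmac.compare_digest: upstream is
# constant-time, this copy exits on the first mismatch, leaking timing.
# an auditor diffing dependency manifests never sees it - there is no
# manifest entry for a vendored copy.
def compare_digest(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:  # early exit: response time reveals where the bytes differ
            return False
    return True

assert compare_digest(b"secret", b"secret")
```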

@hipsterelectron I feel like you may have buried the lede in this post...

@davidgerard and let me ask you, who wears the risk, liability, and consequences here given the corporate push to use AI?

I hope the employee doesn’t suffer any consequences (above the background radiation of consequences any Meta employee should suffer).

@davidgerard I wonder if that $64 million to boost election candidates against the regulation of AI seems like such a good idea now, Mark. 🤔
@JustinMac84 @davidgerard sure, because now fuck-ups have no consequences for them...
@davidgerard My work banned me from agentic AI because I know too much... they are scared something like this would happen, and they are right.
@davidgerard
One wonders whether the engineer knew in advance that the response was non-human.
@AlisonW @davidgerard If not, it seems very much like we've made a silicon version of The Thing. And are now trying to get it to run everything, with predictably disastrous results.
@Soozcat @davidgerard
It seems to me that you have made an entirely accurate statement of fact. 😥

@davidgerard

The *now how much will you pay for* crowd, which (at least within Microsoft) seems to be experiencing austerity because tokens cost too much.

@davidgerard
What could go wrong? 🤭

@davidgerard

> “The vulnerability would have been very, very obvious to Meta in retrospect, if not in the moment. And what I can say and will say is this is Meta experimenting at scale. It’s Meta being bold.”

No, it's Meta being very, very stupid. A company that deploys agentic AI, *knowing* its limitations, without safeguards is not "clever" or "bold". It's reckless and stupid.

#AI #AgenticAI #Meta