Autonomous agents designed to follow "instructions" regardless of source. So there is really no defense against "agent (command) injection" attacks.
đą đą đą đą đą đą đą đą đą đą đą
Just in case you missed the LinkedIn Speak translator...
https://translate.kagi.com/?from=en_gb&to=linkedin&text=let%27s+go+
@neurovagrant
They distrust humans due to their fallibility and potential ulterior motives, while they believe 'AI' to be an objective machine.
It's a weird situation where they both anthropomorphise algorithms by ascribing intelligence and intent to them, while at the same time relying on the fact that they're algorithms as a reassurance that they are objective mathematical and logical tools.
It's cherrypicking the best of both worlds â simultaneously supposedly thinking and infallible.
@neurovagrant 'Zero trust' has always been about potentially nefarious human intentions and sabotage, and since so-called 'AI' cannot have intentions and are supposedly merely doing what they're told as they are programs, they are considered inherently trustworthy.
The problem is that they think of 'AI' in terms of a traditional program: we know what it does because we programmed it, so it cannot do anything it's not supposed to do, unlike a human.
They ignore the black-box nature of 'AI'.
The original Zero Trust paper said, basically: assume endpoints are compromised. Design your system such that a compromised endpoint won't doesn't impact your global security. It rapidly became: massively increase your attack surface by running a load of privileged code on every client that doesn't actually have the ability to make strong security claims and, if that code claims the device is compliant, treat it as completely trusted.
There's a reason I assume TRUST in Zero TRUST is an acronym for 'Thinking Rationally, Understanding Security and Threats'.
Our CEO used the words "zero trust" and "agentic AI" in the same sentence - as examples of what we are all-in on. It was a public event.