@neurovagrant
They distrust humans due to their fallibility and potential ulterior motives, while they believe 'AI' to be an objective machine.
It's a weird situation: they anthropomorphise algorithms by ascribing intelligence and intent to them, while simultaneously leaning on the fact that they're algorithms as reassurance that they're objective mathematical and logical tools.
It's cherry-picking the best of both worlds – supposedly thinking and infallible at the same time.
@neurovagrant 'Zero trust' has always been about potentially nefarious human intentions and sabotage, and since so-called 'AI' cannot have intentions and is supposedly merely doing what it's told, being a program, it is considered inherently trustworthy.
The problem is that they think of 'AI' in terms of a traditional program: we know what it does because we programmed it, so unlike a human it cannot do anything it wasn't designed to do.
They ignore the black-box nature of 'AI'.