@neurovagrant Trust No one was on seemingly every poster.
What happened?!
Autonomous agents designed to follow "instructions" regardless of source. So there is really no defense against "agent (command) injection" attacks.
😱 😱 😱 😱 😱 😱 😱 😱 😱 😱 😱
@neurovagrant y'all remember the old demotivator memes from the early 2000s?
One of them read "None of us is as dumb as all of us." LLMs are non-deterministic balls of shit put together with the absolute dumbest takes from Reddit and StackExchange, thrown into that non-deterministic blender, to shit out what is probably the most awful code known to man. None of us is as dumb as all of us.
@da_667 oh my god, i remember those.
it came to pass. this can't be good.
@neurovagrant @da_667 I mean, there's still an outlier on the stupidity front.
The problem is they're the dictator with the 'most powerful military in the world.'
@rootwyrm @neurovagrant The minute his obituary is announced I'm just going to post the first sentences you see in the game, Brigador:
Great leader is dead.
Solo Nobre Must Fall
Welcome, Brigador
@da_667 @neurovagrant That one was Meetings from https://despair.com/collections/posters/products/meetings
I have a big lithograph of Idiocy (a ring of skydivers captioned “Beware the power of stupid people in large groups.”).
For a while, they sold a shirt labeled “Insecurity”.
Just in case you missed the LinkedIn Speak translator...
https://translate.kagi.com/?from=en_gb&to=linkedin&text=let%27s+go+
@neurovagrant
They distrust humans due to their fallibility and potential ulterior motives, while they believe 'AI' to be an objective machine.
It's a weird situation where they both anthropomorphise algorithms by ascribing intelligence and intent to them, while at the same time relying on the fact that they're algorithms as a reassurance that they are objective mathematical and logical tools.
It's cherrypicking the best of both worlds – simultaneously supposedly thinking and infallible.
@neurovagrant 'Zero trust' has always been about potentially nefarious human intentions and sabotage, and since so-called 'AI' cannot have intentions and are supposedly merely doing what they're told as they are programs, they are considered inherently trustworthy.
The problem is that they think of 'AI' in terms of a traditional program: we know what it does because we programmed it, so it cannot do anything it's not supposed to do, unlike a human.
They ignore the black-box nature of 'AI'.