When I started in security, one of the prevailing attitudes was "The weakest link in the chain will always be the human."
I would like to thank every LLM provider and startup for changing this paradigm by introducing a much weaker link in the chain.
Thank you to everyone saying "it's still the human."
No, it isn't. It's product deployment without any concern for security or impact. This is the equivalent of suggesting every customer catch a falling knife, for their own benefit.
This is nondeterministic, autonomous malicious enablement, and we cannot blame the user as much as I'd like to.
I'd say it's still a human. But it's not the user, it's the product deployer.
In my worldview, responsibility always, and only, lands on humans.
@neurovagrant Why do you surrender agency so readily?
We are and remain masters of our world.
So much of the slopocalypse is shitty CEOs catering to dumb investors who arrogantly yet wrongly think they know a damn thing about IT. All a very (if deplorably) human thing.
That said, your post is funny and I like it a lot.
@phil @neurovagrant
I don't.
I'm a stimulus-response machine. I'm governed by the laws of physics exclusively.
@phil @zaire
Fuck it, I'm thirsty, so if you'll join me and my neckbeard at the "well, actually":
> Metaphor:
> A figure of speech in which a word […] that ordinarily designates one thing is used to designate another, thus making an implicit comparison, as in "a sea of troubles" […]
It is a metaphor. A computer program is not a stick one uses to support their body weight to supplement the functionality of their legs.

@phil @neurovagrant @EndlessMason you have to be smart enough to do the job without AI to be able to use the current generation of AI effectively and safely.
But that's not how it's being sold, and that's not how executives see the situation.
Which means this whole mess isn't an end user failure (oh, if only the end users were smarter and more attentive, BUT THEY'RE NOT).
It's a management failure (not understanding their workers, and not understanding the tools they are making their workers use).
Turns out the weakest link was just waiting for a better prompt.
It's still a human, it's just shifted to the decision-making ones that mandate use of these systems.
The weakest link is the human who signed off on the LLM.
... The "Leader-shit" team that went all in on LLMs?
@neurovagrant I too love how we've made computers susceptible to social engineering.
Great job all around guys
(Sarcastic)
It's crazy how little of an issue it would be if
1) AI CEOs weren't greedy about training data, so the bots wouldn't siphon corporate and private data to use as training data.
2) OpenAI didn't have a feature to make chats visible on the internet.
3) Microsoft didn't make a folder filled with screenshots of EVERYTHING YOU'VE EVER DONE.
And most importantly
4) We stopped giving LLMs full fucking access to our computers, networks, and credit card information.
Like there's absolutely no reason for them to be such a security risk. These are all things they could've avoided if they'd just asked the opinion of one person who isn't sniffing a tech CEO's farts all day.
Now we have assholes like Pete Hegseth trying to superglue ChatGPT to a Tomahawk missile!