⚠️ Asking to get roasted 🚨 well, sorta

Please tell me why you hate AI.

Cosmic brownie points if it's not because of the environment, data privacy, or copyright. Those are all super valid reasons, but I've heard them before.

I'm looking for haters, please step up to the plate.

Get vicious, show me the hate.

@dnsprincess I hate AI because rich people believe Roko's Basilisk is a real thing, and are pushing AI to be "saved" like Roko's Basilisk is the Christian God.

@dnsprincess Large language models are an interesting technology that deserves academic research.

As a "product" it just doesn't work. All the promise of what these glorified chatbots could do has not materialized - and as study after study shows - it's not getting closer ("newer models tended to perform worse in generalization accuracy than earlier ones." - https://royalsocietypublishing.org/doi/10.1098/rsos.241776 ; other similar claims of RLHF reducing the quality of the outputs abound).

@mzedp @dnsprincess
Adding on to "it doesn't work for me" my new favourite (pre-pub) paper "Large Language Models are Unreliable for Cyber Threat Intelligence" https://arxiv.org/abs/2503.23175 (nb I work on CTI for $dayjob )

Several recent works have argued that Large Language Models (LLMs) can be used to tame the data deluge in the cybersecurity field, by improving the automation of Cyber Threat Intelligence (CTI) tasks. This work presents an evaluation methodology that other than allowing to test LLMs on CTI tasks when using zero-shot learning, few-shot learning and fine-tuning, also allows to quantify their consistency and their confidence level. We run experiments with three state-of-the-art LLMs and a dataset of 350 threat intelligence reports and present new evidence of potential security risks in relying on LLMs for CTI. We show how LLMs cannot guarantee sufficient performance on real-size reports while also being inconsistent and overconfident. Few-shot learning and fine-tuning only partially improve the results, thus posing doubts about the possibility of using LLMs for CTI scenarios, where labelled datasets are lacking and where confidence is a fundamental factor.


@dnsprincess I hate the dumb, useless work it's making me do.

I've got a couple of junior engineers that I'm leading on the technology front. Multiple times now I've had to review the AI slop they've tried to use as documentation or change controls and reject it because it's wrong, useless, or incomprehensible.

"But why do I have to redo it?"
"Okay, explain how you're gonna do step 5."
"Uhh..."
"Yeah, your change control includes steps for things we don't have."

The root cause is a people problem, but AI slop is a symptom in the same way that food poisoning is the disease but I'm not any happier about the vomit on my shoes.

@dnsprincess Because for most uses it's fundamentally a stopped clock: it might be coincidentally right twice a day, but it's still useless as a clock, because you can't know when it's right, and if you had a way to reliably tell, why are you "asking the AI" in the first place?

It's "confidently incorrect" frequently enough that it's impossible to tell without extensive cross-checking, and the effort to cross-check is frequently as much or more effort than doing it the "old-fashioned" way.

@dnsprincess You know how when you were a kid, when you didn’t know something, you’d ask an older sibling/cousin/relative, and they’d just make up some bullshit that you didn’t know was bullshit but sounded plausible?

It’s that.

It’s nondeterministic - you don’t get consistent responses, so it’s useless for any task that needs to be performed consistently.

It’s that we lose crucial skills when we don’t practice the manual way of doing things at least once in a while https://public.milcyber.org/activities/magazine/articles/2023/draeger-the-autopilot-problem

It doesn’t take feedback the same way a learning human does. If I commission some artwork, the artist will send me an in-progress sketch. I can red circle any areas that need changing, maybe sketch in a suggestion, and the artist can work with that! It takes the work it’s already done and makes those changes and moves forward. GenAI doesn’t do that, it’s generating totally new images every time. So it can’t “tweak”.


@dnsprincess The app SUNO showed me I can't tell the difference between modern country/pop and AI generated ones.

Wait, who are we roasting again?

@badsamurai @dnsprincess I really need to start writing country pop so I can finally get paid

@dnsprincess This isn’t really what you asked for but I’m a white middle class American male so I feel the need to interject my answer anyway.

I don’t hate AI despite so many of our colleagues hating it.

The environment, data privacy, and content theft (I’m mostly anti-copyright already) is all I really have problems with.

I actually want Fully Automated Luxury Gay Space Communism and I think what we’re doing is steps towards that.

@megabyteghost the phrase "Fully Automated Luxury Gay Space Communism" makes my heart sing somehow ✨
@dnsprincess it’s my ultimate goal

@megabyteghost @dnsprincess LLMs could be a step towards that if we had UBI. Real machine intelligence would be a bigger step, as long as we had UBI.

As it stands today, LLMs are being used to drive people out of work. This increases the number of unemployed people competing for job openings, which means companies don’t have to pay people as much. The only reason companies are investing in them is to oppress labor.

This was most starkly evident with the WGA strike.

@bob_zim @dnsprincess agreed. A UBI is probably the most important thing to do first.
@dnsprincess the more people trust AI, the less they trust themselves; it gradually makes you a weaker and more dependent person
@dnsprincess I can give a junior dev an atomic wedgie, but I can't give one to the AI model

@dnsprincess I asked Microsoft Copilot a question about a system event ID, knowing that the old event ID was deprecated with Windows 8.1. It gave me the deprecated event ID.

I asked Google Gemini for help building a Chronicle rule. It got the syntax wrong.

These are two examples of using the vendor's AI to answer a question about the vendor's own systems, and both times it was wrong.

AI doesn't give you the right answer, it gives you the popular answer. It's Ask the Audience, but the audience has severe cognitive impairments.

@infoseclogger @dnsprincess I just had to deal with intern interviews.

All three interns were from schools with very well-known cybersecurity programs. DePaul. NYU Poly. Carnegie Mellon.

We gave them a take-home quiz. All three responses we got were so similar that at first I thought they had all collaborated. But that was stupid. Then I started asking Copilot and ChatGPT the questions we gave them.

It was more or less the same, even including getting the rule syntax wildly wrong in some cases.

@infoseclogger @dnsprincess sorry to piggyback on this thread, but to answer your question...

It's okay to say you don't know the answer to something. I've been in cybersecurity for over a decade and I still miss things frequently. You learn and you grow.

I need people to understand, especially university students right now trying to enter the job market, I need you to tell me when you don't know how to do something, instead of letting the bullshit machine answer for you.

If you used the bullshit machine, and it's hilariously wrong, as it often seems to be, I don't want to have to ask WTF happened. I want to be able to trust my co-workers, and help them, if necessary, and if they are willing to be helped.

Snake oil salesmen are selling AI to all of us as an inevitability, but it's been YEARS so far, and it still gets things horrendously wrong, even after stealing copyrighted works and most of the body of the (unwilling) internet. Instead of using the bullshit generator, USE THE DOCUMENTATION. USE THE COMMUNITY SITES.

Truth in the Age of Mechanical Reproduction

LLMs are scary, but not for the reasons you've been told.

@dnsprincess

Because it's World Goth Day, and Fuck AI.

@dnsprincess the slop. so much slop. nothing can be trusted anymore. even ignoring the AI answers on the search result, so many of the actual pages that come up are rambling, incoherent LLM nonsense.
@mrsbeanbag @dnsprincess Seconded. LLMs have largely automated creating junk “content” to sell ads. It is coming so much faster that it’s drowning out all the real information. The problem is especially bad when looking for anything remotely exotic. There are 20+ junk pages for every page with a correct explanation or answer.

@dnsprincess

AI isn't useful. When it succeeds, it's by accident/coincidence. What is being sold as AI is in fact a ton of half-baked ideas and promises that have yet to see meaningful delivery to anything other than authoritarian systems. It's hype that empowers the worst of our society to dole out undue and innumerable harm to everyone else.

It's not designed to do its job well, if we presume its job is to perform a task. It's intended to present as useful, but it never measures up to any degree of scrutiny.

The primary thing that AI is, is:
- A hype engine to capture venture capital
- A tool to provide to the ruling classes to intensively devalue the labor of individuals

It's a bubble with no sellable product. To quote someone else: 'a gold rush where there hasn't been any gold, but a lot of people selling shovels.'

We are sitting in front of the world's wealthiest men who are telling us that their product will dramatically change society for the better, all the while pushing to displace people from their employment and replace them with a bot that can't even complete a children's video game.

- There's no profitable end to this venture; it's a $600bn (so far) money pit that everyone is pouring other people's money into
- The products being released are not delivering on what was promised, with everything packaged as 'it will be able to' or 'in later updates', etc., all just a series of big lies about the technology to keep the investor money flowing
- The end outcome is that there will be a few small models that can be used for some minor data-processing tasks, but the long term doesn't include these companies' massive LLMs doing anything but poisoning the well of information on the internet, deliberately driving people into walled gardens to find information that will be tightly controlled and delivered by the platform (see: Google's recent AI-only search, after spending the past 7+ years making regular search bad to push more ads)

The problem with AI isn't the tech. It's the cult of personalities that has grown up around it, hailing it as the best thing since sliced bread, all the while peddling a Racist Linear Algebra Machine.

Not a single promise has been met. No significant change has happened without immense cuts to the labor force, with the replacements resulting in worse service across the board, while the big names actively push the government to legislate their platforms into permanent fixtures of our society, operating entirely in contradiction to the will of the Market and the will of the End User, because at the end of the day, the Customer for LLMs is either:
- The rapidly fascisticizing governments who want it as a tool of public persuasion and information control
- Nominally literate investors caught up in a pipe dream of promises, not wanting to miss out 'just in case'. Think Pascal's Wager, but for investors.

There isn't a single useful thing that LLMs can be reliably used for without massive caveats and massive amounts of human intervention to get something usable in the final outcome. As a technology, it's at best useful for *some* translation tasks, and even then it comes with strings attached. When the bubble pops, it'll result in a massive economic downturn that did not have to happen at all, thanks to the Silicon Valley culture of overpromising and underdelivering, packed up inside an ideology of "effective altruism" that is currently and actively dealing massive harm to Kenyan communities and other Majority World workers, who have been left with PTSD and oftentimes have their jobs spontaneously taken away with zero recourse.

We will look back at this as a period of mass hysteria and psychosis and it'll be one of the most embarrassing times in human history.

@dnsprincess our architect has gone full koolaid, and his team overseas has been generating multiple multi-page documents to support some new project that are either incomplete, wrong, or all fluff. They expect the documents to be used to provision the production copy of their bloated Claude-generated project, but if they were to follow their own bs docs it wouldn’t result in a working system. When told this, they went back to Claude and were prompted to upgrade their account. The project is dead until they fork over money for credits that don’t give them any valid answers.
@dnsprincess All the CEOs are drinking so much Kool-aid they're gonna cause an international shortage

@WinNT4 They're already causing a shortage of clean water.


@dnsprincess This is pretty mild, but I hate how people are using it. I think savvy folks know its limitations but 95% of users will think they're talking to a sentient being, form relationships with it, and trust it to always know what it's talking about.

It's just a technology tool that can be used for good or ill. The problem exists between keyboard and chair.

@dnsprincess It has no soul, and therefore reminds me too much of government.
@dnsprincess asking a question now gets a response where someone copy-pasted my question into an AI and then copy-pasted the result into their response. The response is always wrong and I end up correcting them instead of getting helpful responses.
@dnsprincess It replaces facts and knowledge with made up bullshit.

@dnsprincess Because when I’ve asked it to do the things I wanted it to generate, text or image, it refused because the topics I chose were forbidden.

Free speech much? This iteration of “AI” LLMs is a free speech inhibitor.

@dnsprincess so my thing is, I'm a weird mix of both early adopter and somehow still some kind of Luddite; I never remember it exists. I dislike it for the environmental reasons you mention and for its clear link to fascism. I've seen a lot of this even in my job: the higher-ups just want to automate everything. They'd love to ditch human workers.

But the reason I dislike it most, is probably just how phony it is. The weird, surreal uncanny valley crap, it just really speaks to something when you see it, that I can hardly describe. But yeah. I guess I just like real stuff, let me see those stretchmarks, etc. I don't want glossed-over, phony shit in my life, I never have. I want real shit. And that's what bugs me most about the 'AI' (can that count as something I hate about it? That we're all calling it 'AI' when it's not artificial INTELLIGENCE at all, really just autocorrect, the LLMs?) but I'm pretty sure the thing that bugs me most is the phoniness.

@dnsprincess I hate it because it's essentially a bullshit generator. Like that coworker that has absolutely no idea what they are talking about but cannot admit it, so they start to make up stuff from thin air and say it with utmost confidence.

I double-hate it because the people who are getting obscenely rich from it pretend it does a lot more than it actually does to sell this bullshit to C-level folks.

I triple-hate it because the C-levels actually buy into this shit and act as if the fault that they are not getting their castle in the skies was from the dumb people who don't know how to use AI.

Basically in its current format it's a new type of SaaS: Stupidity as a (dis)Service.

@dnsprincess I hate AI because these dumb numbskull bots keep swarming 'my' OAPEN Library like locusts from hell, making it unusable and forcing me to waste endless time and effort. Is that what you meant? 😀
@dnsprincess I’m actually out of work right now, so I hate that there are so many obviously scammy ai startups taking up space that could be used for actual real jobs in devops
@dnsprincess AI is like Elon Musk. People constantly misunderstand what it is they actually do and what their contributions are and think they are genius-level when they are very definitely not.

@dnsprincess Once we have AI I'll let you know.

Now, the products called "AI" exist to crush labor. They don't have to be good at all, just perceived to be good enough to allow the rich to force highly skilled people to accept work for a fraction of the money/benefits. The only people who need to be convinced LLMs can replace skilled workers are managers and shareholders.

@dnsprincess Dangerously inaccurate responses, combined with how the rich want to use it to cut jobs and impoverish us all.
@dnsprincess There is a monster in the forest and it speaks with a thousand voices.
@dnsprincess the worst kind of wrong is plausibly wrong, convincingly wrong

@dnsprincess Because it's an odious trap.
It is just good enough to make people think they can use it as a lazy substitute for thinking skills, so they start depending on it and lose their own.

Even if it was in fact intelligent this would be hugely problematic, but as it is this is a recipe for disaster as its thoughtlessness will damage things it's directly used for and the thoughtlessness it induces in its victims will do indirect damage.

@dnsprincess It’s billed as a time saver, but if you take even more than a cursory glance at the output, it’s clear that what it generates needs a lot of human help. Which is sometimes more than just doing the thing.
@dnsprincess I deeply resent the fact that people want me to use it to turn the logical processes of programming and debugging into a ridiculous guess and check game of whack a mole because they think that it is somehow more efficient to come up with the magic words to cause auto-correct with delusions of grandeur to emit the correct syntax than to simply look up correct syntax.
@dnsprincess

I feel like it further commodifies things that people like doing (making art, programming, writing books/poetry, etc...) so then people start thinking those things don't have value because they can get them with a prompt, instead of realizing the value in these things is in the doing

Turns something creative into yet another consumption opportunity

Like, the summit is the excuse to climb the mountain. And now when I try to find fellow alpinists to talk about climbing techniques there's the noise of people that take the teleporter to the top and snap a pic.

"Why are you exercising for the ascent? Don't you know you can take the teleporter? It's the same summit either way, we both stood at the same spot"

yeah, but like, i have to live while i'm alive and i have to spend it doing something

and yeah that's still here but the noise to find the creatives in the consumers is raised and given the economic system we live under these people will be devalued and likely become more scarce
@dnsprincess humanity's curiosity to seek out answers and learn during the process is what has brought us to where we are today as a species. the AI promise (that hasn't fully delivered) is the opposite of that curiosity. AI is for idiots who just want the answer and don't want to understand WHY.

@dnsprincess Executives are all in on LLMs because they are a firehose that sprays unpaid interns at a problem. Cheap, replaceable labor that won’t talk back. But it’s an intern who never learns, who won’t set the office on fire, but there’s never an outlier worth hiring. Why bother? More interns!

But CEOs are so far up their own asses that they forget what the most privileged intern is there to do: kiss ass and drink coffee. And AIs can’t drink coffee. You can’t motivate a person who doesn’t exist, they’ll just read what the last kid did, hand in something plausible, and then piss off back to their rich parents’ weekend houses to blast music and do cocaine and wine coolers with their college friends forever

Chicago Sun-Times publishes made-up books and fake experts in AI debacle

A section in the Sun-Times included fake book titles and experts that appear to be made up. The outlet said online it was “looking into” how the content made it in the paper.

@soundclamp @dnsprincess oh wow yeah. Saw this. It's just wild. No one checked it hahaha
@dnsprincess
(in addition to all that you've already mentioned...)
Because genAI models at this point are marketed as something that they aren't.
They're incapable of distinguishing right from wrong; they mostly go off statistical prevalence, yet are marketed as wise.
If one hundred pages are generated with the same wrong info, AI will treat them as a more likely answer than 10 differently phrased pages with correct info.
The humanisation of AI is imo also an insult to humanity itself.
@dnsprincess
GenAI models don't hallucinate, they don't forget & they don't remember.
Every output is made up, some outputs just happen to be truer than others & there's barely a way to make sure.
No cognition takes place "within" (wherever that means) the models.
GenAI doesn't understand words for themselves but tokens of arbitrary length and therefore fails more or less depending on the language. (Although I admit my knowledge on that may be dated.)
Humanising AI dehumanises humanity.
@dnsprincess
GenAI is a technology and like all technologies their moral surroundings depend on its users.
GenAI is currently used to drown out scarce truth by mass-generating bullshit.
Torrents of fake news websites are accessible for free whereas genuine journalism needs pay walls in order to fiscally survive because both public & private actors would rather fund AI data centres than journalists (or combating global warming &al).
Every executive who decides to implement genAI is guilty.
@dnsprincess
GenAI is alluring but also clearly running from the law.
It likewise couldn't be clearer that private genAI companies are currently trying to reel in users only to later bring advertisers on board, perhaps when the law finally catches up, in order to survive. At that point, free tiers will be loaded with ads and prices for ad-free tiers raised astronomically, but by then users will have acquired such learned helplessness, i.e. dependency, that they'll have barely any choice but to pay.
@dnsprincess
I admit the point just before is speculative so far.
GenAI models as they're currently on the Market™ seldom convey any estimation of certainty in their answers. Inquiring with genAI models on any safety-related topic will get you served an output ringing of certainty, or neutrality at best, but the heavens alone know whether the output is true at all; nothing indicates possible uncertainty, as there's no reference against absolute truth, nor even relative truth, only statistics.
Michael | VoltPaperScissors (@VoltPaperScissors@chaos.social)

Attached: 2 images Let me introduce you to the hilarious world of AI-generated origami art instructions. These instructions are impossible to follow, often with incorrect step numbering. While it's entertaining, it's a perfect example of the challenges real (origami) artists face now, as they have to compete with these AI-generated fakes. The pictures below were generated by me. I have a Pinterest board of the most hilarious examples I found if you want a good laugh: https://pin.it/38pSNWSp3 #ai #aiart #origami
