ssekret

@ssekret@infosec.exchange
@joyousjoyness kurwa bober! (fuck, a beaver!)
@ianjs @aren @nixCraft too bad some sites override the / keybind, like in GitHub's case

That study showing that developers think they save 20% of their time when using coding LLMs, while actually taking 20% longer, is so funny.

Not only does the AI bullshit the developers—the developers also bullshit themselves.
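
For anyone who wants that gap spelled out, a back-of-the-envelope sketch (the 10-hour baseline is invented purely for illustration; only the ±20% figures come from the post above):

```python
# Illustrative arithmetic only -- the baseline hours are made up, not from the study.
baseline_hours = 10.0                    # time the task would take without an LLM assistant

perceived_hours = baseline_hours * 0.8   # developers *felt* ~20% faster
actual_hours = baseline_hours * 1.2      # measurements showed ~20% slower

gap = actual_hours - perceived_hours
print(f"felt like {perceived_hours:.1f}h, actually took {actual_hours:.1f}h "
      f"({gap:.1f}h of self-bullshitting per {baseline_hours:.0f}h task)")
```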

Cats are smart creatures. We can learn a lot from them. Credit: @fuckworkmemes #anticapitalism #eattherich #feedthepoor #cat #catmemes #activism #cativism #anarchism #socialism
Imagine if we collectively spent just 10% of the resources that go into the AI bullshit hype machine on solving, you know, the myriad of existential problems humanity faces.
@ownlife I was at the zoo; there were even more people than I expected
All I want for Christmas is for the AI bubble to burst.
He was as untrustworthy as a mobile platform game ad.
A very “surprising pattern” that people don’t want to use fucking shit that doesn’t fucking work and depends on stealing people’s work and fucking lighting the mother-fucking planet on fire while feeding their fucking money into the greedy throats of billionaires.

@thomasfuchs I can relate to this sentiment, but the paper refutes that "perceived ethicality" (or "perceived capability") explains the discrepancy.

https://doi.org/10.1177/00222429251314491

@jedbrown @thomasfuchs And they put it down to those with lower literacy who "perceive AI as magical and experience feelings of awe". The paper is paywalled, so I cannot see how they distinguish between one person thinking the AI is magic and another not, and "differences in perceptions of AI's capability".
@jedbrown @thomasfuchs
I can't access the paper but I do wonder how they determined the literacy levels of participants

@feff @jedbrown @thomasfuchs

I imagine just asking people how they think LLMs work. Those who answer 'it's a bullshit engine built off an empire's worth of stolen goods' would, by my standards, receive top marks.

@jedbrown Or, they prefer people who lack higher literacy, and are more likely to use AI to make themselves look smarter. Less ethicality. @thomasfuchs

@jedbrown I wonder if higher literacy rather correlates with “if I want something done properly I’ll do it myself, and I know how to do it”.

@thomasfuchs

@jedbrown @thomasfuchs Yeah, I think it's really just: the more you know about AI, the more you know its present limitations. And you are less susceptible to the hype of future AI and more realistic about what to expect. Which isn't bad at all, even quite promising. But far away from threatening the experience of skilled workers.

Maybe exceeding the management skills of certain CEOs.

@urwumpe @thomasfuchs Note that it doesn't have to deliver on any technical capability to be an effective pretext for union busting. All it takes is that investors think it's close enough relative to their love for making labor vulnerable. The industry-wide pivot into defense and surveillance is part of the same phenomenon, with even less accountability for fitness for purpose.
@jedbrown @thomasfuchs I tend to disagree slightly there; I think in too many cases AI isn't cost-effective enough to even be worth risking union busting. But of course, the rest of the world isn't as protected as old Europe (but we have other problems). And of course, even bad CEOs get rich in most places.
@jedbrown @urwumpe @thomasfuchs Yep, that's the critical piece. It doesn't need to be capable or fit for purpose. It only needs to be accepted by the masses.

@winterayars @jedbrown @thomasfuchs on the other hand, it was also accepted that robots weld cars instead of letting humans do that. That did not stop my union from becoming more powerful than ever since. Nor did it result in a smaller workforce.

Technology can also lead to more meaningful work and better working conditions. Maybe we should not let CEOs and libertarians decide what AI can be good for?

@urwumpe @jedbrown @thomasfuchs Machines saving on human labor is amazing.

Machines being used to cut everyone out of profit, condensing power and control of society into fewer and fewer hands, is not so great.

@urwumpe @winterayars @thomasfuchs Imagine the "welding machine" doesn't actually weld, but instead smears putty in the joint so it looks like a weld, and the welding machine company has enough influence in government that neither the welding machine company nor the car manufacturer is liable when the cars fly into pieces on the highway. Somehow that spontaneous disassembly happens much more frequently to immigrants and queer people and brown people.

@jedbrown @winterayars @thomasfuchs Sorry, but I am not that religious. I don't believe in miracles, not for me, and not for the bad guys.

Can't you find a more grounded way in which AI can separate and discriminate against people, if put into the wrong hands?

@urwumpe @jedbrown @winterayars You are committing a logical fallacy in presupposing that “AI” actually exists. It doesn’t.

An LLM transforms tokens into other tokens, without knowledge, intelligence, insight, creativity, or even awareness of what it is doing.

It categorically cannot have broad applications.
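
To make the "tokens into other tokens" point concrete, here is a deliberately tiny sketch of that generation loop. The bigram counter below is a toy stand-in, not a real LLM (which is a far larger transformer), but the loop has the same shape: sample a next token from a distribution, append it, repeat.

```python
# Toy sketch of a "tokens in, tokens out" generation loop.
# The "model" is just bigram counts over a tiny corpus -- a stand-in for illustration only.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Training": count which token tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt_token: str, length: int = 8) -> list[str]:
    out = [prompt_token]
    for _ in range(length):
        dist = follows.get(out[-1])
        if not dist:
            break
        tokens, weights = zip(*dist.items())
        out.append(random.choices(tokens, weights=weights)[0])  # sample the next token
    return out

# Nothing here "knows" anything about cats or mats; it only maps tokens to likely next tokens.
print(" ".join(generate("the")))
```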

@thomasfuchs @jedbrown @winterayars What do you think intelligence actually is, at the lowest level? Your brain is made of the same building blocks as the brain of a shark. But you would never say a shark is intelligent compared to a human, would you?

AI has maybe just reached the intelligence of a parrot. Not impressive to a human.

But AI is not an intelligence bound to a body, not a product of nature. It can be unexpected, exotic, alien... but it's a kind of intelligence.

@urwumpe @thomasfuchs @winterayars No, that is a religious position you are taking.

And note that "stochastic parrots" was not referring to the animal, but the verb "to parrot" (as the authors have clarified personally).

@jedbrown @thomasfuchs @winterayars I hope that, in the best case, we are just speaking different languages. Anyway, let's stop here.
@urwumpe @winterayars @thomasfuchs You're asking me to shoehorn that into the welding metaphor or are you seriously asking how "AI" systems discriminate when there is a decade of well-publicized critical literature and new instances come out every day? The systems automate discrimination even when following all the best practices in "fairness", without corporate capture, when the community is involved throughout.
https://www.technologyreview.com/2025/06/11/1118233/amsterdam-fair-welfare-ai-discriminatory-algorithms-failure/
Inside Amsterdam’s high-stakes experiment to create fair welfare AI

The Dutch city thought it could break a decade-long trend of implementing discriminatory algorithms. Its failure raises the question: can these programs ever be fair?

MIT Technology Review
@jedbrown @winterayars @thomasfuchs You don't want to turn this into a good discrimination vs bad discrimination debate, do you?
@jedbrown @thomasfuchs I find fascinating the explanation (from the abstract): "the lower literacy–higher receptivity link is mediated by perceptions of AI as magical and is moderated among tasks not assumed to require distinctly human attributes."
And the final recommendation is chilling and good support for why capitalism must be superseded: "These findings suggest that companies may benefit from shifting their marketing efforts and product development toward consumers with lower AI literacy. In addition, efforts to demystify AI may inadvertently reduce its appeal."
@precariousmind @thomasfuchs Yeah, I had the same reaction to that McKinsey-esque language of damaging society to move product.
https://hachyderm.io/@jedbrown/114838478539079389
Jed Brown (@jedbrown@hachyderm.io)

Attached: 2 images This appears in Journal of Marketing and so some of the writing is 🙃 dystopian, suggesting that effective strategy is to intentionally promote misconceptions and keep the low-AI-literacy audience ignorant. I prefer to read that as a warning rather than an instruction manual. https://doi.org/10.1177/00222429251314491

Hachyderm.io
@thomasfuchs Mystery: Why don't the experts share my opinion, which is obviously correct?
@andreaslindholm @thomasfuchs Real "Nobody knew healthcare was complicated." energy from the paper.
@andreaslindholm @thomasfuchs That has been the question which has baffled me for decades. 🤷🏽‍♂️ Obviously, you and I are in agreement.
@andreaslindholm @thomasfuchs There’s a lot of that going around these days.

@thomasfuchs I commend the restraint you showed when composing your post.

However, I suggest an edit as you seem to have missed out a couple of "fuckings".

I think that your last sentence could be amended in this way:

"...while feeding their fucking money into the greedy fucking throats of fucking billionaires."

@the_wub @thomasfuchs maybe it was a character limit constraint?

@the_wub It's the last couple of sentences of their abstract in particular that incited my outburst of swearing:

"These findings suggest that companies may benefit from shifting their marketing efforts and product development toward consumers with lower AI literacy. In addition, efforts to demystify AI may inadvertently reduce its appeal."

@thomasfuchs

@thomasfuchs
Doesn't work and it's expensive? Sign me up!

@thomasfuchs

In a surprising pattern, we noticed that people who understand a pattern are less likely to be surprised by it.

@thomasfuchs which means: it will be a runaway success because a majority of people won't care.
@pjakobs You'd think that, but many companies have already ditched it. When the power it consumes starts to put their domestic supply into jeopardy, maybe even stupid people won't want it. @thomasfuchs

@Tooden

I believe currently, most AI use is fueled by one of two things:
a) the feeling, among people who don't understand the complexity of the field they are trying to do something in, that they now can, and
b) the fear of missing out

@thomasfuchs

@pjakobs The second of those is a strong factor, for sure. @thomasfuchs

@thomasfuchs

The naivety of academic research is continually surprising.

@thomasfuchs not that surprising really
@thomasfuchs I feel vindicated by this. I did old-school machine learning with hand-coded backpropagation on small neural nets, and even older stuff, before the heyday of TensorFlow and its successors.
I think I have a fair grasp on the limitations of LLMs, and that's why I am so appalled by how uninhibitedly a lot of people are using them for stuff LLMs have no business being used for.
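
For readers who haven't seen it, this is roughly what hand-coded backpropagation on a small net looks like; a toy sketch with my own illustrative architecture and numbers (one hidden layer, sigmoids, XOR), not the poster's original code:

```python
# Minimal hand-coded backpropagation: one hidden layer, sigmoid activations,
# full-batch gradient descent on XOR. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass (squared error, chain rule written out by hand)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # gradient descent update
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())  # approaches [0, 1, 1, 0]
```
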
@Tom_ofB @thomasfuchs It is like magic. Once you know how the trick is done, it is not impressive anymore.

@hugoestr @Tom_ofB @thomasfuchs

Not for me: I admire the magician's skills even more when I discover the trick.

@thomasfuchs Same energy as this evergreen classic. I am happy more people are entering the finding out stage of this current hype bubble.
@frost @thomasfuchs Same in this house, except I'm channelling Ash in his "Get an axe" phase 😊
@frost @thomasfuchs Was coming here to post this. 🤝
@thomasfuchs AI is a bit like "boudin noir" or "fricadelle": once you know how it's made, it tastes a bit different.
@thomasfuchs you forgot "fucking" before the word "billionaires"
@thomasfuchs In the immortal words of my cousin, “I wish stupidity only harmed the stupid.”
@thomasfuchs in a private work chat recently I described AI as being "sold to people who don't understand it, by people who don't care about the truth" and I stand by that assessment.
@thomasfuchs Replace "AI" with "Blockchain" and it's literally the same.

@thomasfuchs

AI is a giant, scummy, planet-damaging, money-sucking scam.

Of course the people who are not making money from it and understand how it works are skeptical of or horrified by its intrusive growth and integration.

Present AI is definitely not Intelligent; more Bullshit.

@thomasfuchs
I feel like "the more people know about it the less they like it" should be a pretty clear market signal for any product.
@thomasfuchs I tried to use Duck.ai to identify a plant I had seen - it was a waste of time. The thing has no understanding of taxonomy; it only knows names and associations.
@thomasfuchs @pluralistic 💯🙏🤩 it’s utter crap.
@thomasfuchs

Got new phones recently (batteries were dying on our 2021-purchased ones). Discovered that the power button was mapped to the AI assistant function by default. That's the same assistant that pops up any time you accidentally say its wake word, so why also map it to the power button??? Fixed that mapping this morning.

Even as someone who has to support customers that want to roll out AI, it's a huge nope in my personal life.