I just read through a new, detailed, seemingly statistically rigorous academic study researching why people don't use AI. Its conclusions on this question are not unreasonable, and while I don't agree with all of them, there are salient points.

I am not going to give this study any direct oxygen by linking to it or naming it specifically. Because in my personal view, its entire purpose is nightmarish.

The ultimate focus is obviously on one key aspect, what do the Big Tech Billionaires need to do to suck EVERYONE into using these horrible systems. The entire study -- again, in my opinion -- is an effort to create a roadmap for Big Tech to "adjust" what they're doing to undermine the concerns non-users of the tech have, and seduce them into becoming addicted users instead.

There was no sense that not using generative AI is a valid choice, rather it's seen as an aberration that needs to be eliminated.

I found the entire study to be both interesting and disgusting.


Basically I read the study as saying something like: "For some reason there are people who don't like to eat dog poop. We need to find a way to adjust our dog poop so that those people will be willing to eat it." More or less.
Elias Mårtenson (@loke@functional.cafe)

What the absolute fuck? I quote:

> "AI can’t transform your organization if your employees won’t use it. Our latest whitepaper, “Navigating the Human Factor,” explores why employee resistance is one of the biggest barriers to AI adoption—and what leaders can do about it."

@loke No, this isn't a white paper, it's a multi-author academic study.
@loke And it relates to the entire global population, not just employees of firms.
@lauren that does not make me feel any better about this at all. How far does the brainrot spread?
Cults gonna cult, only question is how to take power back from them, and leave them squealing for attention without armies of construction workers piloting unstoppable monster machines carving their way across the landscape.

Really, it's as simple as cutting off their gas.

@cy @lauren

AI data centers are big and give off lots of heat. They are probably easy to find and wreck.

Just fire a heat seeking missile in their general direction!

Wait but no then the oligarchs will start building them inside active volcanoes.

CC: @lauren@mastodon.laurenweinstein.org
@lauren
May I ask you to reconsider not sharing this study? The people who work hard to suck people into Gen AI will read it anyway, passing it along by word of mouth. And it's difficult to build social defenses if you don't know what to look out for and be wary of.
@datenwolf I don't think there's anything in there of positive value in terms of defending against this. It is essentially a roadmap to undermining the concerns that most of us already know.
@lauren
Well that much I understood. Which is why I want to read it, so that I know the roadmap – in order to know where to place the roadblocks or maybe even turn parts of the path into a moat proactively.
@datenwolf @lauren I’m interested too, because my work is pushing AI everywhere and it’s annoying. We're quality assurance professionals, for gods’ sake. How is it useful if I have to 100% verify everything? What does this save me?

@lauren @datenwolf thanks! I hate it!

And I think you were right, not a ton useful here to share! I asked but you were right! It looks like a big survey to find out what seemed fairly obvious. “Addressing concerns regarding output quality, ethics, and human connection can significantly enhance the effective use of these technologies” - wow no kidding, more people might use it if it weren’t 1000% plagiarism?

I personally don’t care for it at work because while it’s fast, maybe, you can’t trust the output, and deskilling is a valid concern too. If my junior team members use ChatGPT to write things, how will they learn to write well? And edit? And critique?

I also think it’s watering down everything. If it all becomes averages of averages of averages, everything we write and read will sound the same and repetitive and content-neutral. I want to see new thoughts and nice ways of presenting them, not complete uniformity.

I am TERRIFIED that people think they should use AI for decision-making. What a bad idea. Yet in a work meeting our leader asked what we'd like an AI for, and a peer said: to decide xxx, a thing that is a very hard decision and influenced by a ton of factors. While maybe it could error-check or compare against regs, there is No Way it can make decisions.

Thanks for sharing.

@lauren @datenwolf plus the vampiric energy use at a very bad time to be increasing electrical demand. Vampiric energy use to get garbage.
@3janeTA @datenwolf You're welcome. And I DID warn you!
@lauren @datenwolf like why write this? Hmm people don’t like kicking dogs, I wonder why?
@lauren @3janeTA Thank you for sharing this link. However, I think there are a couple of important things to take away if you "read between the lines," so to speak. First of all: why are the two areas of main concern (connectedness and ethics) the only two with ridiculously low p-values? p<0.001 in a study with ~200 people would mean they'd unanimously agree on that, in both measures. Then the wording: the heading was "concerns", but in the text they used "fear" and "anxiety", thereby pathologizing. 1/

@lauren @3janeTA
Then there's the conscious decision not to include effects on spirituality in the analysis. I wonder why (maybe because in the past few months this aspect of GenAI went completely off the rails and causes some actual problems).

Furthermore, this paper is part of a conference proceedings volume, which means there was an associated oral presentation, or at least a poster. Did this spur discussions?

2/

@lauren @3janeTA

The scenario that immediately came to my mind would be AI assistants to customer-facing developers, nudging their work toward the goals of the organization. Think "developer tasked with implementing a dark pattern, having ethical concerns about actually going through with it." – There's a strong incentive to optimize away those hurdles.

@lauren
Is that the source of a screenshot I saw that framed all the reasons as "fears"?

@lauren Gotta ask these AI fanbois:
" How is AI going to help me pick mulberries, hand water my plants, or clean the cockatiel's cage? Can it cook lunch for me every day?"
No answer...?
Then I don't need it.

I am not against technology; I am perfectly happy recycling older laptops by loading Linux on them, I use the Bash shell and vim, and find computers wonderful for doing graphic layout.

I just don't understand how AI will improve my life or why I should use it. I am perfectly capable of doing deep dives in search engines on my own; and can write well enough to be understood. It seems like a lot of lazy people want computers/AI to do the work of "thinking" for them.😠 That is not how life works.

@lauren For me, ultimately, it's about the attempt to centralise knowledge and culture into the hands of an unaccountable few. If the tech worked as claimed, the outcomes would be even more horrendous, as barriers to adoption would be lower, so Sam Altman, or worse Musk, would be the arbiter of knowledge. It’s the information-age analogue of the enclosure of common lands.

@lauren

Could you share why they think people don't use AI?

As you said, salient.

@lauren @cstross > The ultimate focus is obviously on one key aspect, what do the Big Tech Billionaires need to do to suck EVERYONE into using these horrible systems.

Making them cease to serve corposcum and abuse the user would be a start.

Considering that's the entire business model, I don't foresee that changing.
@lauren My workplace sent out a survey along the same lines on Tuesday. Then on Wednesday announced that, regretfully, the organization has run out of money and some carefully chosen staff would have to be let go.
@lauren I think the current strategy is basically to integrate terrible GenAI in such a way as to make it impossible not to use (like Google's terrible summary thing), or easy to accidentally use (like, I don't know, replacing your search bar with a thing that queries an LLM instead of actually submitting a search query).
@lauren I had to spend half an hour figuring out how to disable Gemini on my mum's phone (after I changed the notification layout back to the way it was before) because she kept triggering it by accident: Samsung replaced the power button with a b***** Bixby button, which now binds to Gemini by default.
@lauren This plan to "suck EVERYONE into using these horrible systems" seems based on the classic consumer-culture playbook outlined well here: https://thereader.mitpress.mit.edu/a-brief-history-of-consumer-culture/

@lauren An AI usage survey went around the office recently, and it was built in a way that I can only say was... pushy. It also gave a list of possible "reasons you aren't using AI yet" that was tragically incomplete and didn't offer any way to add other feedback.

Like, yeah, I am 'concerned about the environment' but 'produces contextless time-wasting garbage without institutional knowledge' and 'concerned it will rot my brain' were not possible options.

Sigh. I'll probably have to stick my neck out and actually write something to the people behind it.

@DarcMoughty Good luck. I hope your CV is up to date.
@lauren Christian Nationalists have been trying a similar question for a long time: how can we compel an entire country to accept Jesus or leave?