I just read through a new, detailed, seemingly statistically rigorous academic study on why people don't use AI. Its conclusions on that question are not unreasonable, and while I don't agree with all of them, it makes some salient points.

I am not going to give this study any direct oxygen by linking to it or naming it specifically, because in my personal view its entire purpose is nightmarish.

The ultimate focus is obviously on one key question: what do the Big Tech billionaires need to do to suck EVERYONE into using these horrible systems? The entire study -- again, in my opinion -- is an effort to create a roadmap for Big Tech to "adjust" what they're doing, undermine the concerns that non-users have about the tech, and seduce them into becoming addicted users instead.

There was no sense that not using generative AI is a valid choice; rather, it's treated as an aberration that needs to be eliminated.

I found the entire study to be both interesting and disgusting.

L

@lauren
May I ask you to reconsider not sharing this study? The people who work hard to suck people into Gen AI will read it anyway, passing it along by word of mouth. And it's difficult to build social defenses if you don't know what to look out for and be wary of.
@datenwolf I don't think there's anything in there of positive value in terms of defending against this. It is essentially a roadmap for undermining the concerns that most of us already know about.
@lauren
Well, that much I understood, which is why I want to read it: so that I know the roadmap – in order to know where to place the roadblocks, or maybe even proactively turn parts of the path into a moat.
@datenwolf @lauren I’m interested too, because my workplace is pushing AI everywhere and it’s annoying. We’re quality assurance professionals, for gods’ sake. How is it useful if I have to verify 100% of everything? What does this save me?

@lauren @datenwolf thanks! I hate it!

And I think you were right, there’s not a ton here worth sharing! I asked, but you were right! It looks like a big survey to confirm what seemed fairly obvious. “Addressing concerns regarding output quality, ethics, and human connection can significantly enhance the effective use of these technologies” - wow, no kidding, more people might use it if it weren’t 1000% plagiarism?

I personally don’t care for it at work because, while it may be fast, you can’t trust the output, and deskilling is a valid concern as well. If my junior team members use ChatGPT to write things, how will they learn to write well? And to edit? And to critique?

I also think it’s watering everything down. If it all becomes averages of averages of averages, everything we write and read will sound the same: repetitive and content-neutral. I want to see new thoughts and nice ways of presenting them, not complete uniformity.

I am TERRIFIED that people think they should use AI for decision-making. What a bad idea. Yet in a work meeting, our leader asked what we would like an AI for, and a peer suggested deciding xxx, a very hard decision influenced by a ton of factors. Maybe it could error-check or compare against regs, but there is No Way it can make decisions.

Thanks for sharing.

@3janeTA @datenwolf You're welcome. And I DID warn you!
@lauren @3janeTA Thank you for sharing this link. However, I think there are a couple of important things to take away if you "read between the lines", so to speak. First: why are the two areas of main concern (connectedness and ethics) the only two with ridiculously low p-values? p<0.001 in a study of ~200 people doesn't require unanimity, but it does require a strikingly one-sided response, and it's curious that exactly the two headline measures show it. Then the wording: the heading said "concerns", but in the text they used "fear" and "anxiety", thereby pathologizing them. 1/
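
To put a rough number on that p-value point, here's a back-of-the-envelope sketch only. It assumes a simple one-sided binomial test against a 50/50 null, which is almost certainly not the analysis the study actually ran:

```python
from scipy.stats import binomtest

n = 200  # rough sample size of the study

# Find the smallest number of "agree" answers out of n that pushes a
# one-sided binomial test against a 50/50 null below p = 0.001.
# This is a toy model, not the study's actual statistics.
for k in range(n // 2, n + 1):
    if binomtest(k, n, p=0.5, alternative="greater").pvalue < 0.001:
        print(f"{k}/{n} agreement ({k / n:.0%}) already yields p < 0.001")
        break
```

Under that toy model, a bit over 60% agreement already clears p < 0.001 (lopsided, but far from unanimous), which is why it's the pattern (exactly the two headline measures) rather than the p-values themselves that looks suspicious.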

@lauren @3janeTA
Then there's the conscious decision not to include effects on spirituality in the analysis. I wonder why (maybe because in the past few months this aspect of GenAI has gone completely off the rails and is causing some actual problems).

Furthermore, this paper is part of a conference proceedings volume, which means there was an associated oral presentation, or at least a poster. Did this spur any discussion?

2/

@lauren @3janeTA

The scenario that immediately came to my mind would be AI assistants for customer-facing developers, nudging their work toward the goals of the organization. Think "developer tasked with implementing a dark pattern, having ethical concerns about actually going through with it." – There's a strong incentive to optimize away those hurdles.