Question for people who choose not to use generative AI for ethical reasons: Do you make that choice despite accepting the growing evidence that it works (at least for some tasks, e.g. coding agents working on some kinds of software)? Or do you reject it because of the ethical problems *and* a belief that it doesn't actually work?

I'm thinking that principled rejection of generative AI might have to be the former kind, *despite* evidence that it works.

@matt many self-reported claims of something working are not the same as evidence of it working.

Research on AI's effects is lagging far behind (obviously — the field is evolving fast). Even ignoring external harms, it's unclear whether it causes long-term personal harm, and what it means for the maintainability of projects.

It's entirely possible that it causes *important* skill atrophy (every new tool causes skill atrophy of some kind; often the skills lost are irrelevant). Of course last year's studies on this topic don't apply to the current models, just like studies done on these models won't apply to next year's models. But patterns are appearing.

It's also possible that large projects where everyone uses AI heavily won't have anyone who understands the details anymore, only the broad designs. Of course an AI can always explain those to you, or you can just regenerate them with a bigger model in the future.

But all that said... If we had a performance-enhancing drug that allowed ppl to be x times more productive, would we really be this careless with it just because its chemical makeup hasn't been restricted yet?

GenAI is basically a nonchemical drug (listen to the LLM maximalists, not me), and I am worried about heavy users. I am worried about companies forcing employees to use it everywhere, and I am worried about ppl getting addicted to it and frying their brains (burnout as a service).

And at this point we haven't even talked about the damage all the misuse (be it malicious or ignorant) has caused, or what the training does to the world and to the ppl in the training mines.

And every time I see some new science about AI, it makes me more and more worried about its heavy users.

https://thingy.social/@malcircuit/116290027307902048

Mallory's Musings & Mischief (@[email protected])

How many studies do researchers need to do before the threat of LLMs is taken seriously? This technology *might* have some useful niche applications, but widespread deployment will be a disaster for humanity. This shit is an existential hazard, and not in the way the AI companies love to talk about. It's not going to take over the world like Skynet, it's a cognitohazard that turns anyone that interacts with it into an idiot. https://www.psychologytoday.com/us/blog/the-algorithmic-mind/202603/adults-lose-skills-to-ai-children-never-build-them
