Question for people who choose not to use generative AI for ethical reasons: Do you make that choice despite accepting the growing evidence that it works (at least for some tasks, e.g. coding agents working on some kinds of software)? Or do you reject it because of the ethical problems *and* a belief that it doesn't actually work?

I'm thinking that principled rejection of generative AI might have to be the former kind, *despite* evidence that it works.

@matt I reject it for ethical reasons, the same way I avoid shopping with Amazon.com for ethical reasons more than any pragmatic reason.

Is Amazon often the cheapest, fastest way for me to acquire a thing? Yes. Is it also a terrible company? Yes. If I can acquire something in another way, I look elsewhere. (I am not perfect about this.)

Even if you could show me evidence that, somehow, generative AI produced 100% accurate code or text, I'd still be against using it on ethical and social grounds.

Do I use some LLM-driven software? Yes. I use some local models for transcription. Do I double-check the results? Also yes, because I cannot be 100% sure their results are accurate, and I don't want a fabricated quote winding up in an LWN article.

Might I use LLM-driven stuff at some point for grammar checking? I already use LanguageTool, so ... maybe?

But purely generative AI stuff... I have too many ethical, social, etc. qualms against it to make it part of my work even if I were confident it was 100% accurate. (This is not all, strictly speaking, ethical - I also have qualms about its impact on FOSS development from many other angles, such as increasing the velocity of PRs and putting maintainers under even more stress.)

I also, currently, reject it partially out of spite/stubbornness -- there is far too much "peer pressure" and pushing to accept it. I cannot claim this is a logical stance, but when this much money is being spent to push something, I feel like somebody should be pushing the other direction. I'm just dumb enough to be that somebody.