Question for people who choose not to use generative AI for ethical reasons: Do you make that choice despite accepting the growing evidence that it works (at least for some tasks, e.g. coding agents working on some kinds of software)? Or do you reject it because of the ethical problems *and* a belief that it doesn't actually work?

I'm thinking that principled rejection of generative AI might have to be the former kind, *despite* evidence that it works.

Thanks to everyone who has responded so far.

To focus on a specific definition of "it works", take this post that I boosted:

https://toot.cafe/@nolan/116185451572229163

He has seen coding agents fix bugs with minimal prompting, and they're effective enough that he finds them terrifying. What should we make of that? He's ambivalent, but he clearly feels we should take the demonstrated abilities of these tools seriously; as a result, he's using them, though not happily. I'm trying to figure out what to do with that.

Nolan Lawson (@[email protected])

I think what a lot of AI critics are missing is that they're judging an LLM by its first draft. This is *not* what terrifies me about these machines. What terrifies me is that you can ask them "find bugs in this PR." Or "find performance flaws." Or really anything. Then have 3 agents (with different models ideally) vote on the result. Then have another fix it. Repeat until all bugs are clean. If you haven't tried this experiment then you haven't reached the dark night of the soul that I have.
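For concreteness, the loop he describes — review, vote on the findings, fix, repeat — can be sketched like this. The stub functions stand in for real model calls; the prompts, agent count, and majority-vote rule are my assumptions based only on the toot, not anything Nolan published:

```python
# Sketch of a multi-agent review loop (stubs in place of real LLM calls).

def find_bugs(code, seed):
    """Stub reviewer agent: flags a bug until the code looks 'fixed'."""
    return [] if "fixed" in code else [f"bug-from-agent-{seed}"]

def apply_fix(code, bug_reports):
    """Stub fixer agent: pretends to patch the reported bugs."""
    return code + "  # fixed"

def review_loop(code, n_agents=3, max_rounds=5):
    for _ in range(max_rounds):
        # Each agent reviews independently (ideally different models),
        # then a majority vote decides whether the code is clean.
        reports = [find_bugs(code, seed=i) for i in range(n_agents)]
        voted_clean = sum(1 for r in reports if not r) > n_agents // 2
        if voted_clean:
            return code  # agents agree: no bugs left
        code = apply_fix(code, reports)  # a separate agent applies fixes
    return code

print(review_loop("def add(a, b): return a - b"))
```

With the stubs, the loop converges after one fix round; with real models, each `find_bugs` and `apply_fix` call would be a prompt to a different agent, and the vote is what filters out a single model's hallucinated findings.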
