Question for people who choose not to use generative AI for ethical reasons: Do you make that choice despite accepting the growing evidence that it works (at least for some tasks, e.g. coding agents working on some kinds of software)? Or do you reject it because of the ethical problems *and* a belief that it doesn't actually work?

I'm thinking that principled rejection of generative AI might have to be the former kind, *despite* evidence that it works.

@matt this decision is ongoing, but personally I think about it with a matrix like this one:

- risk: what harms might I personally face?
- reward: what benefits might I personally accrue from using it?
- externalities: what harm might I be doing to others as a result of using it?
- systems: what harms (or benefits!) might develop as a result of *everyone* using this tech in the way that I am?

@matt right now the evaluation I have of that matrix is:

- risk: AI psychosis, huge amounts of wasted time, skill loss, loss of social credibility, gambling-style compulsive use, and dependence on technology whose price is likely to rise sharply soon
- reward: maybe it could help me write some code a little faster? the evidence is very weak here, even if the sentiment is strong
- externalities: water use, power use, plagiarism, spamming others with low-quality work
- systems: model collapse, financial collapse