Question for people who choose not to use generative AI for ethical reasons: Do you make that choice despite accepting the growing evidence that it works (at least for some tasks, e.g. coding agents working on some kinds of software)? Or do you reject it because of the ethical problems *and* a belief that it doesn't actually work?

I'm thinking that principled rejection of generative AI might have to be the former kind, *despite* evidence that it works.

@matt this decision is ongoing, but personally I think about it with a matrix like this one:

- risk: what harms might I personally face?
- reward: what benefits might I personally accrue from using it?
- externalities: what harm might I be doing to others as a result of using it?
- systems: what harms (or benefits!) might develop as a result of *everyone* using this tech in the way that I am?

@glyph That's a reasonable way to look at it. I think it's easier to argue against using these things if one can point to significant, demonstrable personal risks. If the rewards are stacked and the only counter-arguments are externalities and systems, then abstinence is a much harder sell.
@matt the big problem with the personal risks right now is the lack of any credible safety story from the model vendors. as far as I can tell, we don't *know* what causes AI psychosis. there's some vague correlation with "sycophancy" and maybe they've figured out how to turn that down, but maybe not? we don't know how much skill loss is real. we don't have demonstrated best practices in place.
@matt like, I think that people are far too nervous about nuclear technology because we actually know how that stuff works, we know how to measure dosage and harm and risk, and "ooh, spooky nuclear" is a vibe and not a risk calculation. but the opposite is true for AI systems. we're seeing these wildly dangerous outcomes, and then people kinda yadda-yadda-yadda over "best practices" without ever saying what those practices are or providing validated evidence to support them.
@matt maybe the risk is very low! but if a guy uses a model to help summarize actuarial tables, goes crazy and starts calling himself a Star Child, and the response from the vendor of the product that arguably did this to him is "well, he probably had some family history of schizophrenia or something" (he was over 40, WAY past the age where a disease like that generally presents, and that family history also doesn't exist), well, it's concerning that they still want me to use it.
@glyph Wait, did that actually happen? I mean, the guy calling himself a star child?
@glyph It's easy to get the message, not only from boosters but also from reluctant users like Nolan (whom I boosted and posted about elsewhere on the thread), that the rewards are so great and undeniable that one would have to be a saint to not use the thing just because of the externalities. And, you know, none of us are saints like that when it comes to other problematic things.
@matt that is a vague pastiche of a few different stories, since I can’t check sources right now, but it’s not too far off the mark. this comes from an outline of a post I am writing, and I have like … a thousand citations to keep track of.
@matt as far as the benefits … I know that it is making people feel high, and *maybe* the latest Claude models specifically are just so much better at software than any of the previous six times that somebody said “this is it!!!”, but we still haven’t seen any real hard evidence in the form of monetary ROI. the one company we know is leaning hard into LLMs for everything, Microsoft, seems to be having a historic number of bugs and outages.
@glyph I would love it if Nolan, and no doubt other developers as well, could have the relief of finding out that what seemed to be the "terrifying" effectiveness of current coding agents was in fact an illusion, with the risks (ideally to both the developer and the business/project) outweighing the rewards.
@glyph And then I could continue to not use coding agents without the FOMO.
@matt I wish I could provide that, but my only real insight is that a ratio of benefits to costs *exists*, and that we are structurally disadvantaged in evaluating its denominator (the costs), which boosters either totally ignore or, at best, wildly underestimate. but I don’t know what the ratio is, and it’s very expensive to measure even without those cognitive, economic, and social impediments to getting an accurate value for either number.