Back in January I was looking around for some positive "pro-AI" analysis of the ethics of the problem <https://mastodon.social/@glyph/115908558259725802> and it looks like I finally got what I wanted: <https://types.pl/@wilbowma/116247527449271232>

I definitely don't think I'm fully convinced, but there's more than enough here to sit with for a while and consider. It's such a relief that someone is taking the ethical question *seriously* though.

William J. Bowman🇨🇦 (@[email protected])

I think if I spend any more time on this, I'll risk doing more harm than good: new blog post on "AI" and ethics. https://www.williamjbowman.com/blog/2026/03/13/against-vibes-part-2-ought-you-use-a-generative-model/


@glyph I think I disagree with almost every word in that post, but it's at least clear enough what I'm disagreeing with, which is refreshing?

I do think it's telling, though, that he describes one of the pillars of opposition to AI, as he sees it, as an intellectual property argument rather than a labor rights argument. In fairness, he does revisit labor rights later, but I still wouldn't have thought of IP issues in genAI as being moral questions, per se?

@glyph Mostly it's this part that strikes me as being something I deeply object to, and for three reasons.

We don't know the actual energy usage impact of AI, partly thanks to corporate secrecy.

Whatever progress we've made in renewable energy, that doesn't change that many genAI companies are using non-renewable sources for training and inference energy (to wit, Musk in Memphis).

And finally, genAI eating up capacity means that progress in renewables has a reduced impact on energy use.

@xgranade @glyph This! Even if we knew the energy sources of all genAI products/services and somehow they “purchased” only from “clean” sources, the energy market is a huge network! Demand that somehow got all its use labeled as clean energy would just mean other demand for electricity has to be served by other sources. A majority of electrical production in the US is fossil gas and coal!
@r343l @xgranade this is sadly a point that the worst people in the world like to make, but, it is nevertheless true: money is fungible
@xgranade @glyph Basically my entire problem with the piece is not even where it lands so much as how it’s structured: it sets up straw man after straw man about why the various ethics arguments are bad (including asserting losing a job isn’t necessarily harmful so that argument doesn’t count!). It comes off as fundamentally condescending: we’re all naive unserious people. Only to undo it at the end by wrapping it up as a power relationships argument, which duh.
@r343l @xgranade I still feel kindly disposed to it for reasons I've already stated elsewhere several times, but yeesh, when you put it like that, the bar is really on the floor here, isn't it. like … maybe you're right, maybe it is fair to call them "strawmen", but it takes the critic position *so* much more seriously than most anti-anti writing that it felt like a breath of fresh air
@r343l @xgranade like to extend the analogy into absurdity a little bit, almost every other pro-AI piece I read just throws an old hat on the ground and sets it on fire, it feels like a sign of respect when someone actually goes to the trouble to gather some actual straw and stuff it into some clothes first
@glyph @r343l I think, irrespective of whether the post strawmans the anti-AI case or not, the post *does* make the pro-AI stance more clear, which makes arguments a bit more productive.
@glyph @xgranade Hahahaha. But yeah I agree it’s more serious than most. I just find it hard to stomach an argument that repeatedly includes things like <<“Not having a job” is not necessarily harmful>> which, uh, sure in some abstract sense that may be “true”, but throwing that out like that makes you come off as an asshole.
@glyph @xgranade I guess I am repeating myself and should go like do something actually fun with my time. 😂

@r343l @glyph @xgranade It's not even a coherent argument on his own terms! His whole ethical framework is basically that, in his words, one is ethically obligated not to cause harm. Therefore unemployment only has to be harmful once for the job killer to be ethically bad. "*Not necessarily* harmful" has no bite; he's committing himself to doing no harm at all, ever.

Amusingly, the argument *could* work if he embraced utilitarianism.

@glyph This is an absolutely marvellous metaphor.