Back in January I was looking around for some positive "pro-AI" analysis of the ethics of the problem <https://mastodon.social/@glyph/115908558259725802> and it looks like I finally got what I wanted: <https://types.pl/@wilbowma/116247527449271232>

I definitely don't think I'm fully convinced, but there's more than enough here to sit with for a while and consider. It's such a relief that someone is taking the ethical question *seriously* though.

William J. Bowman🇨🇦 (@[email protected])

I think if I spend any more time on this, I'll risk doing more harm than good: new blog post on "AI" and ethics. https://www.williamjbowman.com/blog/2026/03/13/against-vibes-part-2-ought-you-use-a-generative-model/


@glyph I think I disagree with almost every word in that post, but it's at least clear enough what I'm disagreeing with, which is refreshing?

I do think it's telling, though, that he describes one of the pillars of opposition to AI, as he sees it, as an intellectual property argument and not a labor rights argument; in fairness, he does revisit labor rights later, but I still wouldn't have thought of the IP issues in genAI as moral issues, per se?

@glyph Mostly it's this part that strikes me as being something I deeply object to, and for three reasons.

We don't know the actual energy usage impact of AI, partly thanks to corporate secrecy.

Whatever progress we've made in renewable energy, that doesn't change the fact that many genAI companies are using non-renewable sources for training and inference energy (to wit, Musk in Memphis).

And finally, genAI eating up capacity means that progress in renewables has a reduced impact on energy use.

@xgranade @glyph Knowing the author outside of that post a bit, I would not consider them pro-AI, but that said, I do disagree with their analysis of the environmental aspect, at the very least. I think the post brushes it aside by offloading it onto the "power" aspect (in a form of rhetorical irony) while ignoring what is actually happening.

I think the post also ignores the harms done to labour, including those who are recruited at low wages to filter out CSAM and other filth from the training data.

@gwozniak @xgranade in fairness to the author, I think that this starts to get into the Reality Is Gish Galloping You problem with writing about this topic: getting one's arms around the whole of the ethical problems is incredibly difficult. For example, popular writing about the power issues has rarely touched on the fact that you *can't* use renewables for these things, and in fact I don't know of a citation I can easily drop in to explain *why* Musk was running so many methane generators
@gwozniak @xgranade this affects both sides; on the pro-AI side, there's "did you consider the power plant runoff problem", "what about coolant contamination", "what about the *regulatory* incentives that have placed the DCs in bad spots, rather than DCs in the abstract". on the anti-AI side you've got the fact that in the time it takes to research a post, nine new models came out, now local models are actually good, did you know you can use qwen for coding, there's an ethically trained one now too
@gwozniak @xgranade anyway, none of this is a reason to consider that stuff *right*, but it is a reason to try our best to be patient and kind as we slog our way through this discourse, because we're probably wrong about a bunch of specific details too, and it's just SO hard to get through ENOUGH data to come to a useful conclusion that someone actually putting in the work to analyze the ethics rather than handwaving them away deserves a lot of credit

@glyph @gwozniak @xgranade

I keep coming back to the fact that these problems with dirty power vs. renewables were well-known and understood, but the broligarchs rushed ahead and the tech giants started building data centers anyway.

They didn't have to. They could have planned it out and been mindful of the environmental impact. They could have. But they didn't, because money.

They manufactured a way to accelerate global warming instead of taking it slow and doing it with the least possible impact. Nobody was pushing for this other than investors. They had a product in search of a problem. They wanted to flood everything everywhere as quickly as possible, to be the first to find the niche where it fits, so they could set the rules and monetize it. That was the only consideration. They fired or pushed out anyone who didn't agree.

They did this on purpose.