Back in January I was looking around for some positive "pro-AI" analysis of the ethics of the problem <https://mastodon.social/@glyph/115908558259725802> and it looks like I finally got what I wanted: <https://types.pl/@wilbowma/116247527449271232>

I definitely don't think I'm fully convinced, but there's more than enough here to sit with for a while and consider. It's such a relief that someone is taking the ethical question *seriously* though.

William J. Bowman🇨🇦 (@[email protected])

I think if I spend any more time on this, I'll risk doing more harm than good: new blog post on "AI" and ethics. https://www.williamjbowman.com/blog/2026/03/13/against-vibes-part-2-ought-you-use-a-generative-model/


@glyph I think I disagree with almost every word in that post, but it's at least clear enough what I'm disagreeing with, which is refreshing?

I do think it's telling, though, that he frames one of the pillars of opposition to AI, as he sees it, as an intellectual property argument rather than a labor rights argument. In fairness, he does revisit labor rights later, but I still wouldn't have thought of IP issues in genAI as being moral, per se?

@glyph Mostly it's this part that strikes me as being something I deeply object to, and for three reasons.

We don't know the actual energy usage impact of AI, partly thanks to corporate secrecy.

Whatever progress we've made in renewable energy, that doesn't change the fact that many genAI companies are using non-renewable sources for training and inference energy (to wit, Musk in Memphis).

And finally, genAI eating up capacity means that progress in renewables has a reduced impact on energy use.

@xgranade @glyph Knowing the author outside of that post a bit, I would not consider them pro-AI, but that said, I do disagree with their analysis of the environmental aspect, at the very least. I think it brushes it aside by offloading it to the "power" aspect (in a form of rhetorical irony) while ignoring what is actually happening.

I think the post also ignores the harms done to labour, including those who are recruited at low wages to filter out CSAM and other filth from the training data.

@gwozniak @xgranade in fairness to the author, I think that this starts to get into the Reality Is Gish Galloping You problem with writing about this topic: getting one's arms around the whole of the ethical problems is incredibly difficult. For example, popular writing about the power issues has rarely touched on the fact that you *can't* use renewables for these things, and in fact I don't know of a citation I can easily drop in to explain *why* Musk was running so many methane generators
@gwozniak @xgranade this affects both sides; on the pro-AI side, there's "did you consider the power plant runoff problem", "what about coolant contamination", "what about the *regulatory* incentives that have placed the DCs in bad spots, instead of blaming DCs abstractly". on the anti-AI side you've got the fact that in the time it takes to research a post nine new models came out, now local models are actually good, did you know you can use qwen for coding, there's an ethically trained one now too
@gwozniak @xgranade anyway none of this is a reason to consider that stuff *right*, but it is a reason to do our best to be patient and kind as we slog our way through this discourse, because we're probably wrong about a bunch of specific details too, and it's just SO hard to get through ENOUGH data to come to a useful conclusion that someone actually putting in the work to analyze the ethics and not handwave them away deserves a lot of credit

@glyph @gwozniak @xgranade

I keep coming back to the fact that these problems with dirty power vs. renewables were well-known and understood, but the broligarchs rushed ahead and the tech giants started building data centers right away anyway.

They didn't have to. They could have planned it out and been mindful of the environmental impact. They could have. But they didn't, because money.

They manufactured a way to accelerate global warming instead of taking it slow to do it with the least possible impact. Nobody was pushing for this other than investors. They had a product looking for a problem. They wanted to flood everything everywhere as quickly as possible to be the first to find the niche where it fits so they could set the rules and monetize it. That was the only consideration. They fired or pushed out anyone who didn't agree.

They did this on purpose.

@glyph @gwozniak I'd tend to agree, but he kind of presented the anti-AI argument as though it was comprehensive, which sort of invites the "what about my favorite argument" kind of problem.

@glyph @gwozniak @xgranade

you *can't* use renewables for these things

slightly OOTL but I assume this amounts to renewables being more intermittent than "fuel goes in, power goes out"?

@cxberger @gwozniak @xgranade more or less. I am not an expert here and as I said, no citation, but my vague understanding is that particularly when you are doing *training*, you need to be able to fire up an entire hojillowatt at once, run every one of those computers full-tilt all at the exact same time, then shut them all down really fast. this presents challenges at every layer of the fabric
@cxberger @gwozniak @xgranade most computers in general, and datacenters are no exception, have all these assumptions baked in about amortization. like sure you've got more-power and less-power times, but everything ramps up and ramps down stochastically separated by time, you don't have to be able to reboot the entire DC at once, you can phase things and do smart scheduling and defer delivering power and whatnot
@cxberger @gwozniak @xgranade both the hardware nature of GPUs and the software nature of AI workload challenge all that stuff, and create these big intense swings that can't really be buffered or managed, you just have to cope with all eleventy gazillion things turning on all at once, running white-hot for days, then shutting off all at once, which is a nightmare for networking gear, for power distribution gear, for the grid, etc
@cxberger @gwozniak @xgranade typical renewables setups involving batteries are inherently based on these types of assumptions. there's maximum rates that the batteries can charge, and maximum rates the batteries can discharge. Renewables can give you kind of arbitrarily large amounts of power and at some scale could probably even satisfy the raw numbers that the DC wants in terms of kWh, but in terms of W… getting the power out of all those batteries *instantly* adds a layer of challenge
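(To make that kWh-vs-W distinction concrete, here's a toy sketch with entirely made-up numbers — every figure below is an assumption for illustration, not a claim about any real datacenter or battery bank:)

```python
# Toy model: a battery bank can hold plenty of *energy* (kWh) for a training
# run, while still being unable to deliver the *power* (kW) demanded when
# every GPU ramps up at the same instant. All numbers are invented.

BATTERY_CAPACITY_KWH = 500_000   # total stored energy in the bank
MAX_DISCHARGE_KW = 50_000        # hardware limit on delivery rate

TRAINING_SPIKE_KW = 200_000      # synchronized load: everything on at once
SPIKE_DURATION_H = 2             # how long the spike lasts

# Energy the spike consumes: power * time
energy_needed_kwh = TRAINING_SPIKE_KW * SPIKE_DURATION_H  # 400,000 kWh

enough_energy = energy_needed_kwh <= BATTERY_CAPACITY_KWH  # True: capacity is fine
enough_power = TRAINING_SPIKE_KW <= MAX_DISCHARGE_KW       # False: rate is not

print(f"energy ok: {enough_energy}, power ok: {enough_power}")
```

The second check is the point being made above: the bank stores more than enough energy for the whole spike, but it can't push that energy out fast enough when the entire load switches on at once.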
@xgranade @glyph almost goes without saying that the "if" there is holding the weight of the world, especially as some talking heads call for more natural gas production/plants to power all this stuff (or just, y'know, look at memphis)
@xgranade @glyph though perhaps this is wading into the same "confusion" (I am not sure if the two can rightly be decoupled in the world we actually live in) between mere "generative models" (which are... just a weights file, I guess?) and the "AI industry" which is actually demanding all of this
@xgranade @glyph I've seen this exact same argument made in reference to cryptocurrency and it was just as disingenuous then. Construction of renewables has its own costs/pollution associated with it even if operation is clean so it's better overall to not increase demand while replacing existing capacity with renewables.
@xgranade @glyph There is an "If" in there that is doing a lot of heavy lifting.
@xgranade @glyph This! Even if we knew the energy sources of all genAI products/services and somehow they “purchased” only from “clean” sources, the energy market is a huge network! Demand that somehow got all its use labeled as clean energy would just mean other demand for electricity has to be served by other sources. A majority of electrical production in the US is fossil gas and coal!
@r343l @xgranade this is sadly a point that the worst people in the world like to make, but, it is nevertheless true: money is fungible
@xgranade @glyph Basically my entire problem with the piece is not even where it lands so much as how it’s structured: it sets up straw man after straw man about why the various ethics arguments are bad (including asserting losing a job isn’t necessarily harmful so that argument doesn’t count!). It comes off as fundamentally condescending: we’re all naive unserious people. Only to undo it at the end by wrapping it up as a power relationships argument, which duh.
@r343l @xgranade I still feel kindly disposed to it for reasons I've already stated elsewhere several times, but yeesh, when you put it like that, the bar is really on the floor here isn't it. like … maybe you're right, maybe it is fair to call them "strawmen" but it's takes the critic position *so* much more seriously than most anti-anti writing that it felt like a breath of fresh air
@r343l @xgranade like to extend the analogy into absurdity a little bit, almost every other pro-AI piece I read just throws an old hat on the ground and sets it on fire, it feels like a sign of respect when someone actually goes to the trouble to gather some actual straw and stuff it into some clothes first
@glyph @r343l I think, irrespective of whether the post strawmans the anti-AI case or not, the post *does* make the pro-AI stance more clear, which makes arguments a bit more productive.
@glyph @xgranade Hahahaha. But yeah I agree it’s more serious than most. I just find it hard to stomach an argument that repeatedly includes things like <<“Not having a job” is not necessarily harmful>> which, uh, sure in some abstract sense that may be “true”, but throwing that out like that makes you come off as an asshole.
@glyph @xgranade I guess I am repeating myself and should go like do something actually fun with my time. 😂

@r343l @glyph @xgranade It's not even a coherent argument on his own terms! His whole ethical framework is basically that - his words - one is ethically obligated not to cause harm. Therefore unemployment only has to be harmful once for the job killer to be ethically bad. '*Not necessarily* harmful' has no bite; he's committing himself to doing no harm at all, ever.

Amusingly, the argument *could* work if he embraced utilitarianism.

@glyph This is an absolutely marvellous metaphor.
@xgranade @glyph Thanks for sharing the blog post. I'm also not able to agree with everything, since my ethical framework seems to differ. One thing I totally agree with is this:
“... Every line of every artifact, if the user cannot stand by it, cannot justify a design decision, cannot explain something, then they, not the ‘AI’, have failed.”
Nonetheless, the fact that the author doesn't mention the thousands of abused ghostworkers annotating the material for those models is troubling to me.
@xgranade @glyph Same feels exactly.
@cthos @glyph Kinda feels like a weird form of seatbelt thinking... now that there's better renewables, we can afford to waste more energy, which is not how you fight climate change.
@xgranade @glyph He starts to lose me in the second paragraph: 'if you look into any one of the arguments, the details are (shocking) a little more complicated'. He implies here that the other side doesn't engage with the detail and nuance but he does. This does not seem to set the stage for good-faith engagement. 1/3
@xgranade @glyph The ethical framework he sets out is unsophisticated. He condemns utilitarianism (the claim that this is 'the dominant ethics pretty much everywhere' is... well, a claim) and consequentialism, but his own ethical framework is itself consequentialist. 2/3
@xgranade @glyph Importantly, it entails that there can never be any ethical obligation to act for the good: the only obligation is to avoid doing bad. It is perfectly ethical to passively allow the world to slide into a worse state. Much of the rest of what he writes can be rejected on this basis alone. 3/3