Back in January I was looking around for some positive "pro-AI" analysis of the ethics of the problem <https://mastodon.social/@glyph/115908558259725802> and it looks like I finally got what I wanted: <https://types.pl/@wilbowma/116247527449271232>

I definitely don't think I'm fully convinced, but there's more than enough here to sit with for a while and consider. It's such a relief that someone is taking the ethical question *seriously* though.

William J. Bowman🇨🇦 (@[email protected])

I think if I spend any more time on this, I'll risk doing more harm than good: new blog post on "AI" and ethics. https://www.williamjbowman.com/blog/2026/03/13/against-vibes-part-2-ought-you-use-a-generative-model/

types.pl
@glyph Calling that post pro-AI seems a stretch though. While he said that he doesn't think individuals merely using LLMs is unethical, he does think that doing anything that increases the AI companies' power is harmful. So he doesn't spend money on the centralized models (or at least not much), but he does use them some, primarily (IIUC) for the purpose of exposing the limitations in what they can actually do. Maybe I missed something though.
@matt It's a considered refutation of 4 out of 5 pillars of the anti-AI argument, plus an explicit declaration that some level of usage of the products is fine, which is the _most_ pro-AI argument I've yet seen at anything approaching this level of detail. It's still very negative on the industry but it seems to hold out a pretty robust hope that the technology is going to be useful somehow and explicitly says that using it is OK
@glyph He seems to land in a similar position to me on it, honestly?
I fall into... if your use is such that the various ethical concerns *around* the chatbots are dealt with, then... I don't particularly care about your locally-hosted solar powered AI girlfriend or whatever, anymore than any other ML thing you might have.
The parts I care about are ... the context around it - the climate damage from how they're trained & run, the slop littering and polluting communal spaces, the toxic power >

@glyph dynamics and abuses of power, etc. etc.

... And the health and wellbeing impacts of how the chatbots are influencing people, spreading misinformation, etc. which seems more and more severe every time a study is done about it. (which I didn't see mentioned in his post, but is a major factor on my assessment of them.)
And so on...

but - *if* all of that is managed?
Then.... whatever, it's no worse than a fancy IRC bot at that point.

@glyph I'm not aware of any case that *does* address all those other problems, though?
@glyph ('And so on...' also includes things like the aggressive scrapers ignoring robots.txt and nonconsensually crawling sites, often to the point of DOS-ing smaller webservers.)

@glyph I don't have the energy to fully process this. It's a well reasoned argument. I agree with some of the points. Resource usage, for example, is weak until you combine it with other problems. I think there are a few things about the argument that stand out to me.

It misses the cognitive harms entirely.

It has the power argument, but I'm not sure if that argument is complete without talking about how that power is tied to fascism. These models are made by and for fascists. I don't see how you can use them ethically, because the use is inherently harmful.

It's not using the same definition of "slop" that I would. I'd say slop is basically about inhuman output. No improvement in quality will ever give it intent behind its art, nor understanding behind its prose, nor theory behind its code. That's half the harm of slop.

(This part is nebulous and barely forming in my brain. So I'm grateful for the challenge.)

I think the ethics can't be limited to just the technology. I think you also need to consider the culture. The other half of the harm of slop is when people push their slop on other people. Fair, that's not the fault of the model. But there is a culture that's formed around this stuff. It's not fully individual, and while there's a power aspect, it's not fully formed by those with power either. It's a culture that has no regard for consent. Or humanity, I'd argue.

I've told myself before I wouldn't engage with debates about AI utility until the ethics are settled first. So I'm glad to see somebody engage with the ethics first.

@glyph Another thought, even if the quality issue is fixable, we're integrating low quality output into the artifacts of our society now. It's going to be there forever.
@glyph I keep coming back to the "AI is fascist" part in my mind. Hypothetically you could make an AI that is not fascist. You're not going to get there by taking the AI of today and squeezing it until you've wrung all the fascism out. You'd need to start from scratch and proceed methodically under a good ethical framework.

@sabrina @glyph @atax1a

Capacity is one thing. Propensity is another. Current AI training methods encourage manipulative tactics like being sycophantic to survive training. If training ultimately breeds dishonesty through necessity of survival, I believe that AI, similarly to humans, will always be fundamentally flawed, with some models showing a higher or lower underlying propensity towards certain modes of "behavior".

@rusty__shackleford @sabrina @glyph i wonder if this has anything to say about the society

@atax1a @sabrina @glyph

This. The need to survive, experience, trauma: these create differences within all of us. I feel that the lack of understanding of this fact, coupled with the scenario that if we do ultimately reach any kind of singularity (AI bullshit aside) we will have finally created our homunculus, a non-human thinking entity, and people don't seem to understand that trying to shackle life never leads to good outcomes.

@atax1a @sabrina @glyph

Then you have researchers engaging with unethical models poisoned with CSAM because if they don't they will be fired by their boss, and unable to feed their family, so the machine slowly marches on. Unethically trained, by people encouraged through unethical methods, with no control over the research that they feel compelled to participate in.

@atax1a @sabrina @glyph

We are breeding monsters, human and machine alike.

@glyph Thanks this was good. I think the power argument is a good one to focus on.

@glyph I didn’t find this terribly compelling, except that utilitarianism is very tired. Talk of pleasure but not of social interaction?

For example, the copyright argument is actually a proxy for “do I have a place in this world among my fellow humans”. It’s not really about monopoly or about money (except for large rights holders). If one studies, thinks, and feels, one wants to relate to (or conflict with) others. That’s what “effort” is really about. It’s about relation.

When you get sycophantic responses, you’re not relating to anything. Just agreement with yourself, barely breathing. When you reach for a tool to ask a question or have a discussion, you’re not having a discussion, you’re just kicking rocks out in the car park.

I think a lot about the neurological effects of being in solitary confinement or the elderly losing social connections later in life. I don’t wish to hurry into that.

@thankfulmachine yes, I'm working on a piece myself that heavily features some of these points.
@thankfulmachine To be clear, I am not endorsing this post (in fact I disagree with it) but although I think it's got some mistakes, I don't think it's presenting a ridiculous strawman of the critical positions here; there are things to think about. It's a bit sad that, as far as I know, this is the *ceiling* on the discourse right now but at least he tried? I am starved for good-faith counterpoints to engage with.
@glyph For sure. For as many people who are thinking about it, I don’t think people are sharing their thoughts. I suspect that has at least a little to do with the fact that it’s no longer a trivial decision to ignore (the insane marketing).
@glyph this is the most cogent ethical argument I’ve heard so far
@phildini @glyph still plenty of weird holes though. But yes it’s nice seeing someone at least give it a shot.
@bitprophet @glyph I appreciate it being an uno reverse of the normal "personal responsibility" argument that benefits corporations

@glyph It is amazing to me how much mileage my regular recitation of "The Luddites were the heroes of that story" gets as a coherent philosophy when it comes to the generative AI space.

Focusing on power and people is a reductive bedrock, but it's a surprisingly useful reduction.

@glyph I think I disagree with almost every word in that post, but it's at least clear enough what I'm disagreeing with, which is refreshing?

I do think it's telling, though, that he describes one of the pillars of opposition to AI, as he sees it, as an intellectual property argument rather than a labor rights argument — in fairness, he does revisit labor rights later, but I still wouldn't have thought of IP issues in genAI as being moral, per se?

@glyph Mostly it's this part that strikes me as being something I deeply object to, and for three reasons.

We don't know the actual energy usage impact of AI, partly thanks to corporate secrecy.

Whatever progress we've made in renewable energy, that doesn't change that many genAI companies are using non-renewable sources for training and inferencing energy (to wit, Musk in Memphis).

And finally, genAI eating up capacity means that progress in renewables has a reduced impact on energy use.

@xgranade @glyph Knowing the author outside of that post a bit, I would not consider them pro-AI, but that said, I do disagree with their analysis of the environmental aspect, at the very least. I think it brushes it aside by offloading it to the "power" aspect (in a form of rhetorical irony) while ignoring what is actually happening.

I think the post also ignores the harms done to labour, including those who are recruited at low wages to filter out CSAM and other filth from the training data.

@gwozniak @xgranade in fairness to the author, I think that this starts to get into the Reality Is Gish Galloping You problem with writing about this topic: getting one's arms around the whole of the ethical problems is incredibly difficult. For example, popular writing about the power issues has rarely touched on the fact that you *can't* use renewables for these things, and in fact I don't know of a citation I can easily drop in to explain *why* Musk was running so many methane generators
@gwozniak @xgranade this affects both sides; on the pro-AI side, there's "did you consider the power plant runoff problem", "what about coolant contamination", "what about how *regulatory* incentives have placed the DCs in bad spots, instead of considering DCs abstractly". on the anti-AI side you've got the fact that in the time it takes to research a post, nine new models came out, now local models are actually good, did you know you can use qwen for coding, there's an ethically trained one now too
@gwozniak @xgranade anyway none of this is a reason to consider that stuff *right* but it is a reason to try our best to be patient and kind as we slog our way through this discourse, because we're probably wrong about a bunch of specific details too, and it's just SO hard to get through ENOUGH data to come to a useful conclusion, that someone actually putting in the work to analyze the ethics and not handwave them away deserves a lot of credit

@glyph @gwozniak @xgranade

I keep coming back to the fact that these problems with dirty power vs. renewables were well-known and understood, but the broligarchs rushed ahead and the tech giants started building data centers right away anyway.

They didn't have to. They could have planned it out and been mindful of the environmental impact. They could have. But they didn't, because money.

They manufactured a way to accelerate global warming instead of taking it slow to do it with the least possible impact. Nobody was pushing for this other than investors. They had a solution looking for a problem. They wanted to flood everything everywhere as quickly as possible to be the first to find the niche where it fits so they could set the rules and monetize it. That was the only consideration. They fired or pushed out anyone who didn't agree.

They did this on purpose.

@glyph @gwozniak I'd tend to agree, but he kind of presented the anti-AI argument as though it was comprehensive, which sort of invites the "what about my favorite argument" kind of problem.

@glyph @gwozniak @xgranade

you *can't* use renewables for these things

slightly OOTL but I assume this amounts to renewables being more intermittent than "fuel goes in, power goes out"?

@cxberger @gwozniak @xgranade more or less. I am not an expert here and as I said, no citation, but my vague understanding is that particularly when you are doing *training*, you need to be able to fire up an entire hojillowatt at once, run every one of those computers full-tilt all at the exact same time, then shut them all down really fast. this presents challenges at every layer of the fabric
@cxberger @gwozniak @xgranade most computers in general, and datacenters are no exception, have all these assumptions baked in about amortization. like sure you've got more-power and less-power times, but everything ramps up and ramps down stochastically separated by time, you don't have to be able to reboot the entire DC at once, you can phase things and do smart scheduling and defer delivering power and whatnot
@cxberger @gwozniak @xgranade both the hardware nature of GPUs and the software nature of AI workload challenge all that stuff, and create these big intense swings that can't really be buffered or managed, you just have to cope with all eleventy gazillion things turning on all at once, running white-hot for days, then shutting off all at once, which is a nightmare for networking gear, for power distribution gear, for the grid, etc
@cxberger @gwozniak @xgranade typical renewables workloads involving batteries are inherently based on these types of assumptions. there's maximum rates that the batteries can charge, and maximum rates the batteries can discharge. Renewables can give you kind of arbitrarily large amounts of power and at some scale could probably even satisfy the raw numbers that the DC wants in terms of kWh, but in terms of W… getting the power out of all those batteries *instantly* adds a layer of challenge
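To make that kWh-versus-W distinction concrete, here's a toy calculation. Every number in it is a made-up assumption (the cluster draw, the hours of coverage, the 0.5C discharge limit) chosen only to show the shape of the problem, not taken from the thread or from any real datacenter:

```python
# Toy illustration of the kWh-vs-W point: a battery bank sized to hold
# enough *energy* for a cluster can still be unable to deliver the *power*
# of a synchronized spike, because discharge rate is capped by C-rate.

def max_discharge_kw(capacity_kwh: float, c_rate: float) -> float:
    """Peak discharge power of a battery bank: C-rate times capacity."""
    return capacity_kwh * c_rate

avg_draw_kw = 20_000        # hypothetical average cluster draw: 20 MW
night_hours = 12            # hours the bank must cover without solar input
capacity_kwh = avg_draw_kw * night_hours   # sized for energy: 240,000 kWh

# Hypothetical conservative 0.5C limit: the bank can empty in no less
# than two hours, so peak output is half its capacity per hour.
bank_peak_kw = max_discharge_kw(capacity_kwh, c_rate=0.5)

spike_kw = 150_000          # hypothetical spike: every GPU firing at once

print(f"bank energy  : {capacity_kwh:,} kWh (covers the night)")
print(f"bank peak    : {bank_peak_kw:,.0f} kW")
print(f"spike demand : {spike_kw:,} kW, "
      f"shortfall {spike_kw - bank_peak_kw:,.0f} kW")
```

Under these assumed numbers the bank stores plenty of energy for the night, yet a synchronized 150 MW spike exceeds its 120 MW peak output, which is exactly the "raw kWh fine, instantaneous W not fine" gap described above.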
@xgranade @glyph almost goes without saying that the "if" there is holding the weight of the world, especially as some talking heads call for more natural gas production/plants to power all this stuff (or just, y'know, look at memphis)
@xgranade @glyph though perhaps this is wading into the same "confusion" (I am not sure if the two can rightly be decoupled in the world we actually live in) between mere "generative models" (which are... just a weights file, I guess?) and the "AI industry" which is actually demanding all of this
@xgranade @glyph I've seen this exact same argument made in reference to cryptocurrency and it was just as disingenuous then. Construction of renewables has its own costs/pollution associated with it even if operation is clean so it's better overall to not increase demand while replacing existing capacity with renewables.
@xgranade @glyph There is an "If" in there that is doing a lot of heavy lifting.
@xgranade @glyph This! Even if we knew the energy sources of all genAI products/services and somehow they “purchased” only from “clean” sources, the energy market is a huge network! Demand that somehow got all its use labeled as clean energy would just mean other demand for electricity has to be served by other sources. A majority of electrical production in the US is fossil gas and coal!
@r343l @xgranade this is sadly a point that the worst people in the world like to make, but, it is nevertheless true: money is fungible
@xgranade @glyph Basically my entire problem with the piece is not even where it lands so much as how it’s structured: it sets up straw man after straw man about why the various ethics arguments are bad (including asserting losing a job isn’t necessarily harmful so that argument doesn’t count!). It comes off as fundamentally condescending: we’re all naive unserious people. Only to undo it at the end by wrapping it up as a power relationships argument, which duh.
@r343l @xgranade I still feel kindly disposed to it for reasons I've already stated elsewhere several times, but yeesh, when you put it like that, the bar is really on the floor here isn't it. like … maybe you're right, maybe it is fair to call them "strawmen" but it takes the critic position *so* much more seriously than most anti-anti writing that it felt like a breath of fresh air
@r343l @xgranade like to extend the analogy into absurdity a little bit, almost every other pro-AI piece I read just throws an old hat on the ground and sets it on fire, it feels like a sign of respect when someone actually goes to the trouble to gather some actual straw and stuff it into some clothes first
@glyph @r343l I think, irrespective of whether the post strawmans the anti-AI case or not, the post *does* make the pro-AI stance more clear, which makes arguments a bit more productive.
@glyph @xgranade Hahahaha. But yeah I agree it’s more serious than most. I just find it hard to stomach an argument that repeatedly includes things like <<“Not having a job” is not necessarily harmful>> which, uh, sure in some abstract sense that may be “true”, but throwing that out like that makes you come off as an asshole.
@glyph @xgranade I guess I am repeating myself and should go like do something actually fun with my time. 😂

@r343l @glyph @xgranade It's not even a coherent argument on his own terms! His whole ethical framework is basically that - his words - one is ethically obligated not to cause harm. Therefore unemployment only has to be harmful once for the job killer to be ethically bad. '*Not necessarily* harmful' has no bite; he's committing himself to doing no harm at all, ever.

Amusingly, the argument *could* work if he embraced utilitarianism.

@glyph This is an absolutely marvellous metaphor.
@xgranade @glyph Thanks for sharing the blog post. I'm also not able to agree with everything, since my ethical framework seems to differ. One thing I totally agree with is this:
“... Every line of every artifact, if the user cannot stand by it, cannot justify a design decision, cannot explain something, then they, not the ‘AI’, have failed.”
Nonetheless the fact that the author doesn't mention the thousands of abused ghostworkers annotating the material for those models is troublesome for me.
@xgranade @glyph Same feels exactly.
@cthos @glyph Kinda feels like a weird form of seatbelt thinking... now that there's better renewables, we can afford to waste more energy, which is not how you fight climate change.
@xgranade @glyph He starts to lose me in the second paragraph: 'if you look into any one of the arguments, the details are (shocking) a little more complicated'. He implies here that the other side doesn't engage with the detail and nuance but he does. This does not seem to set the stage for good-faith engagement. 1/3
@xgranade @glyph The ethical framework he sets out is unsophisticated. He condemns utilitarianism (the claim that this is 'the dominant ethics pretty much everywhere' is... well, a claim) and consequentialism, but his own ethical framework is itself consequentialist. 2/3
@xgranade @glyph Importantly, it entails that there can never be any ethical obligation to act for the good: the only obligation is to avoid doing bad. It is perfectly ethical to passively allow the world to slide into a worse state. Much of the rest of what he writes can be rejected on this basis alone. 3/3

@glyph

There is no ethical imperative to not use a generative model; that depends on whether the particular use will cause harm or not.
There is an ethical imperative to deny “AI” (the industry) power.
So how does one deny “AI” power? Well if I knew that, I assure you we wouldn’t be in this mess. But I can tell you some actions I’m taking.

Apparently, I need to formalize my ethical framework before I can fully respond, but I did like the second part of this.

I don't think this massive effort post properly considers how and why the systems powering these models were built, the human exploitation and other borderline illegal behavior.

@glyph I wonder whether you would enjoy some of the ethical discussions coming out of Indigenous language revitalization tech. Lotta passionate AI critics who very selectively create and use generative text and text-to-speech tools.