I think if I spend any more time on this, I'll risk doing more harm than good: new blog post on "AI" and ethics.

https://www.williamjbowman.com/blog/2026/03/13/against-vibes-part-2-ought-you-use-a-generative-model/

Against Vibes Part 2: Ought You Use a Generative Model

Since the widespread availability and forced deployment of generative models, people have argued about the ethics of using them. Many arguments have been presented that they're _bad_: they use too much electricity, boil the oceans, massively inf...

@wilbowma This was good reading. Perhaps it's time I finally wrote my thoughts on this.

I don't completely agree with some of your arguments, but not in the way that, say, I disagree with the people running the current US government.

However, it was nice to learn you share essentially the same philosophical outlook on life that I do.

@wilbowma I do agree, however, that the power aspect is the most important part and, to me, the core of why I do not use these tools.
@GeoffWozniak I'd be shocked if anyone agreed with me completely. I'd be interested to hear your thoughts. I don't think my mind is totally settled on the matter, but... it is more settled having written this.

@wilbowma The "arguing with yourself" kind of style is also how I tend to approach most problems.

I have had visceral reactions to recent "AI" stuff since it hit the scene because of the power problem, and I've had to consider other things carefully because I researched ways to make a "coding assistant".

@wilbowma this put into words so many of the reasons why i have felt uncomfortable with many of the anti llm arguments
@wilbowma as someone who typically avoids discussions of ethical frameworks because they feel incredibly incorrect to me in ways i can't really articulate, i liked the discussion of ethical frameworks here
@wilbowma thanks for putting this out, i especially agree with a lot of your conclusions at the end

@wilbowma That was an interesting read. I agree that power probably is the biggest issue.

Though I think we also can't ignore the slop & related skills problem, even if it might be fixed eventually (which I doubt will happen soon). I believe that collectively lowering standards will lead to harm, even if we can't predict exactly how that harm will manifest. We have reason to believe that e.g. poorly understood software will have exploits, poorly researched writing will have bias, etc.

@wilbowma To make it into more of an individual issue: If we could use generative models carefully to ensure we aren't creating lower quality work or doing anything harmful, that would be better. But the issue is that as the user of the generative model, we may not be able to recognize problem(s) with the output. We see this a lot with people using generative models outside their area of expertise, but even an expert will misremember important details from time to time.
@typeswitch @wilbowma there's a point where slop becomes epistemic poison, and mass producing it is even more harmful than what we've seen in medicine and psychiatry so far

@wilbowma fascinating read; I'm curious if your university ever assigns you to teach the accreditation-required tech ethics course, I think the students could benefit from your perspective.

Do you think your framework applies differently to building the tools than to using them? On the one hand, the "reasonable expectation based on reasonable knowledge" test is another level abstracted when the question is "will the likely users of the tool I'm building use it to cause harm?" (or, perhaps more in your framework, "is the availability of this tool likely to cause harm?").

As an example, I think it's morally wrong to build missiles for Northrop, even if you're not the one firing them, because "people die" is the whole point of the tool; I'm not sure how to define the category, but I think the devs at xAI building the Grok porn machine are arguably working in the same ethical category. It's maybe clearer in the AI industry power argument -- it's still human labour and expertise building these tools, and if it's *your* labour and expertise then you're part of the AI industry power machine.

(Full disclosure, I spent two years working on an AI code-generation tool; I'm not proud of the work, and eventually "I am personally contributing to the devaluation of labour in my own industry" was a prominent part of why I quit.)

@wilbowma This is a very good piece, actually!
@wilbowma THANK YOU SO MUCH! I totally disagree and I think you are skipping over several of the arguments that I find most compelling to abstain, but this is a VERY well argued piece that I am going to need to think hard about, and it actually presents good and considered arguments for LLMs, and it is SO hard finding stuff like this! I really really appreciate the effort that went into this, and I wish you the best of luck in your effort to stop spending time on this, I wish I could too!

@wilbowma

Thanks for that - interesting!

@wilbowma I don't see the addendum you mention in the replies
@wilbowma

Good one on seeing the labour vs capital argument.

But the rest is just a coping mechanism to justify that paying for/using AI is still okay.

Blink twice if your manager forced you into this.

@wilbowma I think this is a useful post and I mostly agree with your conclusions, although I am not a fan of the structure.

I think you phrased every argument except your own in a way that is very easy to refute by making them about the technology (and not the AI industry), only presenting their "real" versions in the section about power. But this is what (most) people really mean by these arguments, and I think it sounds a bit disingenuous to pretend otherwise.

I also have other complaints that were already raised here, but overall thank you for writing this.

@wilbowma i think a big thing you're missing about the "intellectual property" arguments is the more compelling ones are about nonconsensual enclosure and devaluation of human creative labor, not about protecting legal copyright in a vacuum
@chrisamaphone @wilbowma One should definitely be able to say that copyright is not a great framework to support a society where creative work can be rewarded, but at the same time that the way the AI industry is organised is extremely harmful. (But I guess in your words @wilbowma this is part of the power argument? Although I don't see how you can argue about copyright without arguing about power: copyright is in principle a tool to give power to creators...)
@mevenlennonbertrand @wilbowma it may not be a great framework but yeah given that it's the one we're in, it is one of the very few tools individual creators actually have to object to exploitation / seek fair compensation
@chrisamaphone @mevenlennonbertrand @wilbowma And it's describing what amounts to the concentration of intellectual capital, which should be clear enough for anybody who accepts the necessity of transitional demands