Back in January I was looking around for some positive "pro-AI" analysis of the ethics of the problem <https://mastodon.social/@glyph/115908558259725802> and it looks like I finally got what I wanted: <https://types.pl/@wilbowma/116247527449271232>

I definitely don't think I'm fully convinced, but there's more than enough here to sit with for a while and consider. It's such a relief that someone is taking the ethical question *seriously* though.

William J. Bowman🇨🇦 (@[email protected])

I think if I spend any more time on this, I'll risk doing more harm than good: new blog post on "AI" and ethics. https://www.williamjbowman.com/blog/2026/03/13/against-vibes-part-2-ought-you-use-a-generative-model/

types.pl
@glyph Calling that post pro-AI seems a stretch though. While he said that he doesn't think individuals merely using LLMs is unethical, he does think that doing anything that increases the AI companies' power is harmful. So he doesn't spend money on the centralized models (or at least not much), but he does use them some, primarily (IIUC) for the purpose of exposing the limitations in what they can actually do. Maybe I missed something though.
@matt It's a considered refutation of 4 out of 5 pillars of the anti-AI argument, plus an explicit declaration that some level of usage of the products is fine, which is the _most_ pro-AI argument I've yet seen at anything approaching this level of detail. It's still very negative on the industry, but it seems to hold out a pretty robust hope that the technology is going to be useful somehow, and it explicitly says that using it is OK.