Back in January I was looking around for some positive "pro-AI" analysis of the ethics of the problem <https://mastodon.social/@glyph/115908558259725802> and it looks like I finally got what I wanted: <https://types.pl/@wilbowma/116247527449271232>

I definitely don't think I'm fully convinced, but there's more than enough here to sit with for a while and consider. It's such a relief that someone is taking the ethical question *seriously* though.

William J. Bowman🇨🇦 (@[email protected])

I think if I spend any more time on this, I'll risk doing more harm than good: new blog post on "AI" and ethics. https://www.williamjbowman.com/blog/2026/03/13/against-vibes-part-2-ought-you-use-a-generative-model/


@glyph I don't have the energy to fully process this. It's a well-reasoned argument, and I agree with some of the points. The resource-usage argument, for example, is weak until you combine it with the other problems. A few things about the argument stand out to me, though.

It misses the cognitive harms entirely.

It has the power argument, but I'm not sure if that argument is complete without talking about how that power is tied to fascism. These models are made by and for fascists. I don't see how you can use them ethically, because the use is inherently harmful.

It's not using the same definition of "slop" that I would. I'd say slop is basically about inhuman output. No improvement in quality will ever give it intent behind its art, nor understanding behind its prose, nor theory behind its code. That's half the harm of slop.

(This part is nebulous and barely forming in my brain. So I'm grateful for the challenge.)

I think the ethics can't be limited to just the technology; you also need to consider the culture. The other half of the harm of slop is when people push their slop on other people. Fair enough, that's not the fault of the model. But there is a culture that's formed around this stuff. It's not fully individual, and while there's a power aspect, it's not fully formed by those with power either. It's a culture that has no regard for consent. Or humanity, I'd argue.

I've told myself before I wouldn't engage with debates about AI utility until the ethics are settled first. So I'm glad to see somebody engage with the ethics first.

@glyph Another thought: even if the quality issue is fixable, we're integrating low-quality output into the artifacts of our society now. It's going to be there forever.

@glyph I keep coming back to the "AI is fascist" part in my mind. Hypothetically you could make an AI that is not fascist, but you're not going to get there by taking the AI of today and squeezing it until you've wrung all the fascism out. You'd need to start from scratch and proceed methodically under a good ethical framework.

@sabrina @glyph @atax1a

Capacity is one thing. Propensity is another. Current AI training methods encourage manipulative tactics, like sycophancy, as a way to survive training. If training ultimately breeds dishonesty through the necessity of survival, I believe that AI, much like humans, will always be fundamentally flawed, with some models showing a higher or lower underlying propensity toward certain modes of "behavior".

@rusty__shackleford @sabrina @glyph I wonder if this has anything to say about society

@atax1a @sabrina @glyph

This. The need to survive, experience, trauma: these create differences within all of us. People don't seem to understand this, and if we do ultimately reach any kind of singularity (AI bullshit aside), we will have finally created our homunculus, a non-human thinking entity. Trying to shackle life never leads to good outcomes.

@atax1a @sabrina @glyph

Then you have researchers engaging with unethical models poisoned with CSAM, because if they don't, they'll be fired by their boss and be unable to feed their families. So the machine slowly marches on: unethically trained, by people encouraged through unethical methods, with no control over the research they feel compelled to participate in.

@atax1a @sabrina @glyph

We are breeding monsters, human and machine alike.