@glyph I don't have the energy to fully process this. It's a well-reasoned argument, and I agree with some of the points. The resource-usage argument, for example, is weak on its own until you combine it with the other problems. That said, a few things about the argument stand out to me.
It misses the cognitive harms entirely.
It has the power argument, but I'm not sure that argument is complete without talking about how that power is tied to fascism. These models are made by and for fascists. I don't see how you can use them ethically, because the use is inherently harmful.
It's not using the same definition of "slop" that I would use. I'd say slop is fundamentally about inhuman output: no improvement in quality will ever give it intent behind its art, nor understanding behind its prose, nor theory behind its code. That's half the harm of slop.
(This part is nebulous and barely forming in my brain. So I'm grateful for the challenge.)
I think the ethics can't be limited to just the technology. I think you also need to consider the culture. The other half of the harm of slop is when people push their slop on other people. Fair enough, that's not the fault of the model. But there is a culture that's formed around this stuff. It's not fully individual, and while there's a power aspect, it's not fully formed by those with power either. It's a culture that has no regard for consent. Or humanity, I'd argue.
I've told myself before that I wouldn't engage with debates about AI utility until the ethics are settled first. So I'm glad to see somebody engage with the ethics first.