In this context I'm more worried about the disdain for the userbase, but I do think there's some kind of moral both-sides nonsense coming from programmers on this.
There's a lot of "I share concerns, but-"
Paul has said that he's super-concerned about the concentration of power with AI models and that he hopes local models will change that.
But also, like.. he admits they're not usable, and he's not using them. So.. he's just hoping that changes, and in the meantime..
Like whenever I hear somepony say that local models will solve the problems with power concentration into big tech, I want to ask them, "if you're wrong, and if they don't, are you going to stop using the hosted models?"
"Do you actually believe that will happen? Enough that you'd wait for it?"
I view a lot of the "I share concerns" talk as purely performative. None of the programmers I see saying it use AI any differently from the open boosters and evangelists. Their usage is identical - they just occasionally write concerned comments and blog posts on top of it.
I think that a lot of programmers have convinced themselves that if they *say* they have concerns about the moral implications of using these models, then that means they're good. They don't have to do anything about those concerns, they don't have to stop using Claude. Just be unhappy about it.
It's impossible to fully disengage from unethical systems. But somepony actively going out of their way to argue that the system *should be used*, and actively arguing against the critters who are saying they want to disengage from it - I think it's fair to say that suggests some things.
And even their reframing towards ethics is designed to flatten debate, to flatten discourse about the effect of these models.
They say that they want better discussion, but they don't. They're fucking obsessed with reducing all of it down to "can it build a React website?"
To a programmer like Paul, if he likes using agents, then everything else - whether they're helpful to the economy or to the average programmer, whether they're de-skilling him, their effects on energy, society, and power, their role in fascism and surveillance..
He'll concede that all of that is bad, but none of it is as important to him as whether or not he *likes using the tool*.
If he likes the tool, then not using it is off the table entirely.
And I'm supposed to pretend that's adding nuance to the debate. It's not!