Yesterday, I had an argument with an AI booster. I'm not going to link to it, both because I don't want to platform that and because I don't want anyone to go harass them. But what I found very interesting was that when I asked point-blank if there was any degree to which ethical problems with LLMs could make them not want to use AI, they told me no, there was not, and implied that they evaluated AI purely on the basis of its efficacy.

I have neither the time nor the inclination to argue that point with them further when it comes to AI. But I do think there's a broader point worth critical examination, especially as tech continues to build out surveillance, age verification, automated filtering and censoring, and other tools that do immense damage when used by authoritarians.

We *cannot* afford to evaluate tech purely based on whether it "works" or not.

AI doesn't work¹, so I suspect it's easy to forget the larger point: that *even if* AI did work (and again, it doesn't), it would still need to be critically examined from an ethical perspective.

Failing to do so is how we ended up with the massive surveillance networks we have today.

___
¹Here again, I'm referring to the current wave of hype products. Boosters love wearing the ML shit that does work as a shield against criticism.

@xgranade This has also been fascinating to me lately.

My latest blog post specifically muses on what that calculus looks like for each person.

I always presumed that a major factor is the belief that the harms are not as bad as reported (which calls into question which sources we view as authoritative), or that the current and/or future benefits to humanity are worth it (current harms being collateral damage in the name of progress).

But for a person to be unable to imagine any scenario that would change their mind seems crazy to me.

I often ask myself what it would take to change my mind on this issue, and while I think most of those scenarios are highly improbable, I can still imagine them.

@xgranade hmm, now that I think about it, maybe I should articulate that in a future blog post.