I don't have the time or the inclination to argue that point with them further when it comes to AI. But I do think there's a broader point that's worth critical examination, especially as tech continues to build out surveillance, age verification, automated filtering and censoring, and other tools that do immense damage when used by authoritarians.
We *cannot* afford to evaluate tech purely based on whether it "works" or not.
AI doesn't work¹, so it's easy to forget that larger point, I suspect? That *even if* AI did work (and again, it doesn't), it still would need to be critically examined from an ethical perspective.
Failing to do so is how we have massive surveillance networks today.
___
¹Here again, referring to the wave of current hype products. Boosters love wearing the ML shit that does work as a shield against criticism.
I'm not sure I'd say "AI doesn't work" anymore. It definitely doesn't "work" to the degree that the loudest boosters will claim it does. But like, I do think it's recently crossed a threshold where it can be a useful tool in the right hands.
Which I personally find very annoying since I too have moral qualms about the broader AI industry. E.g. the point about surveillance you're making I think is an important one.
@kevingranade I never want to put my AI Luddism on a pedestal and make it immune to critique... this is, for example, why I said my closed-mindedness on the subject is both temporary and a reasoned response to bad-faith DDoS attacks on discourse.
To that extent, I'm glad for critique from "my" side. But the purity culture discourse (with a few important exceptions) isn't that, it's a wedge.
@kevingranade I find that progressive movements, as a consequence of the laudable and correct willingness to self-criticize, tend to be vulnerable to wedge attacks. Fuck, as a trans person I *am* a wedge, or at least the right wing has turned me into a wedge used to weaken opposition to violent and cruel immigration policies.
We need to get a lot better at distinguishing wedges from critiques.
@xgranade This has also been fascinating to me lately.
My latest blog post is specifically musing on what that calculus looks like for each person.
I always presumed that a major factor is the belief that harms are not as bad as reported (which calls into question what sources we are viewing as authoritative) or that the current and/or future benefits to humanity are worth it (current harms are collateral damage in the name of progress).
But for a person to be unable to imagine any scenario that would change their mind seems crazy to me.
I often ask myself what it would take to change my mind on this issue, and while I think most of those scenarios are highly improbable, I can still imagine them.
The stuff that does work is in its infancy, anyway.
How would you define "work" in this context? By this I mean what claims are being made by the hype.
@neongod @xgranade A lot depends on whether we see government regulation as an imposition by a powerful authority, or as a protection. In a democracy, government works *for* us, as per Lincoln's definition. In a democracy, ethics is a way we all protect each other.
We also have to be careful not to blame the victim, I think.
@xgranade The way that I personally interpret cases like this is a sort of "just world" belief. If it was truly bad, surely it would not be allowed? If there was a real problem, there would be some kind of higher power that stops it.
This also aligns with conversations where I point out that this stuff is heavily subsidized and the person says "well, it's free/cheap now", with no further elaboration. The implication is: "I will use it because I can. If it was bad to use, it would not have been usable."
If you believe that the status quo is good and just, then you don't need to consider anything outside of your immediate gratification. The consequences (to society or to your own brain) are someone else's problem. Once the rockets go up...
I am so sick of the general capitalist culture's habit of evaluating everything on whether it's "profitable" first and foremost, and whether it's good, decent, healthy, or moral later, or never. It's a sick way to run a society, full stop. I will never willingly use their error-prone, unethical, environmentally disastrous slop machine, nor purchase a work from anyone who has. I'm fine being a Luddite on this or whatever else they want to call me. I don't respect them enough to care.
@xgranade I almost fell out of my chair when I checked out an AI training course and one of the takeaways was it is liable to hallucinate so double check all its output instead of blindly relying on it.
That advice of course makes sense, but then how exactly am I saving time by using this thing?