A very “surprising pattern” that people don’t want to use fucking shit that doesn’t fucking work and depends on stealing people’s work and fucking lighting the mother-fucking planet on fire while feeding their fucking money into the greedy throats of billionaires.

@thomasfuchs I can relate to this sentiment, but the paper refutes the idea that "perceived ethicality" (or "perceived capability") explains the discrepancy.

https://doi.org/10.1177/00222429251314491

@jedbrown @thomasfuchs Yeah, I think it's really just: the more you know about AI, the more you know its present limitations. You're less susceptible to the hype about future AI and more realistic about what to expect. Which isn't bad at all, even quite promising. But far from threatening the expertise of skilled workers.

Maybe exceeding the management skills of certain CEOs.

@urwumpe @thomasfuchs Note that it doesn't have to deliver on any technical capability to be an effective pretext for union busting. All it takes is that investors think it's close enough, relative to their love of making labor vulnerable. The industry-wide pivot into defense and surveillance is part of the same phenomenon, with even less accountability for fitness for purpose.
@jedbrown @urwumpe @thomasfuchs Yep, that's the critical piece. It doesn't need to be capable or fit for purpose. It only needs to be accepted by the masses.

@winterayars @jedbrown @thomasfuchs On the other hand, it was also accepted that robots weld cars instead of humans doing it. That did not stop my union from becoming more powerful than ever. Nor did it result in a smaller workforce.

Technology can also lead to more meaningful work and better working conditions. Maybe we should not let CEOs and libertarians decide what AI can be good for?

@urwumpe @winterayars @thomasfuchs Imagine the "welding machine" doesn't actually weld, but instead smears putty into the joint so it looks like a weld, and the welding-machine company has enough influence in government that neither it nor the car manufacturer is liable when the cars fly to pieces on the highway. Somehow that spontaneous disassembly happens much more frequently to immigrants and queer people and brown people.

@jedbrown @winterayars @thomasfuchs Sorry, but I am not that religious. I don't believe in miracles, not for me, and not for the bad guys.

Can't you find a more grounded example of how AI can separate and discriminate against people if put into the wrong hands?

@urwumpe @winterayars @thomasfuchs Are you asking me to shoehorn that into the welding metaphor, or are you seriously asking how "AI" systems discriminate when there is a decade of well-publicized critical literature and new instances come out every day? These systems automate discrimination even when all the best practices in "fairness" are followed, without corporate capture, and with the community involved throughout.
https://www.technologyreview.com/2025/06/11/1118233/amsterdam-fair-welfare-ai-discriminatory-algorithms-failure/
Inside Amsterdam's high-stakes experiment to create fair welfare AI (MIT Technology Review): The Dutch city thought it could break a decade-long trend of implementing discriminatory algorithms. Its failure raises the question: can these programs ever be fair?
@jedbrown @winterayars @thomasfuchs You don't want to turn this into a good discrimination vs bad discrimination debate, do you?