What's going on here is that unrelated libertarian principles are being recoded as issues of free speech. All of a sudden, preventing algorithmic harm becomes leftist censorship, and the culture war is used as a bulwark against government regulation of discriminatory technologies.

"Algorithmic decisions about parole, loan approvals, interest rates, program admissions, insurance premiums, security clearance, etc. that depend on race and ethnicity? That's not discrimination, it's *speech*."

@ct_bergstrom interestingly, we got so used to algorithms discriminating based on gender and age that it's not even a topic of discussion, nor did it make it onto your list. 😔
@MarcinW I think that's an unfair critique given the conversations I have on a daily basis, the material I teach, and so forth. The thing about writing in 500-character posts is that when one has only a single sentence with which to offer a concrete example, one picks a single, powerful point (racial discrimination) and the fact that one doesn't include a full catalog of other problems cannot be taken as blindness to them.

@ct_bergstrom honestly, worldwide I think that algorithmic gender discrimination is BY FAR the most prevalent, followed by ageism.

Here in Poland I don't even recall the last time I filled in "race"/"ethnicity" in any system made in Europe rather than the US. Hard to discriminate on something you don't know (perhaps that's one solution: where possible, don't gather this data?)
And the US having issues with algorithmic race discrimination doesn't stop gender discrimination from running rampant either.

@MarcinW I literally have lectured multiple times about algorithmic gender discrimination this month. I don't know why you seem to think that I am a doubter.

As for "Here in Poland I don't even recall the last time I filled in 'race'/'ethnicity' in any system made in Europe rather than the US. Hard to discriminate on something you don't know": are you serious?

ML systems pick up correlates of gender/race/etc. even when the protected attribute itself is withheld. See, e.g., Amazon's recruiting system, which was never given gender as an input: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
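The proxy mechanism is easy to demonstrate on synthetic data. Here is a toy sketch (all feature names, probabilities, and the scoring rule are invented for illustration, not taken from the Amazon case): gender is never shown to the "model", but a correlated feature inherits the bias baked into the historical hiring labels.

```python
import math
import random

random.seed(0)

# Synthetic "résumés": each is a dict of boolean features.
# Gender is NOT a feature, but the invented proxy feature
# "womens_club" strongly correlates with it.
def make_resume(gender):
    return {
        "python": random.random() < 0.5,  # independent of gender
        "womens_club": random.random() < (0.8 if gender == "f" else 0.05),
    }

# Historical labels reflect a biased hiring record:
# men were hired at a much higher rate than women.
data = []
for _ in range(2000):
    gender = random.choice("mf")
    hired = random.random() < (0.6 if gender == "m" else 0.2)
    data.append((make_resume(gender), hired))

# Naive log-odds weight per feature, learned WITHOUT access to gender
# (add-one smoothing to avoid division by zero).
def weight(feature):
    hired_with = sum(1 for f, h in data if f[feature] and h) + 1
    hired_without = sum(1 for f, h in data if not f[feature] and h) + 1
    total_with = sum(1 for f, _ in data if f[feature]) + 2
    total_without = sum(1 for f, _ in data if not f[feature]) + 2
    return math.log((hired_with / total_with) / (hired_without / total_without))

print(weight("python"))       # near zero: genuinely uninformative
print(weight("womens_club"))  # clearly negative: the proxy inherits the bias
```

The model never sees gender, yet it learns to penalize the proxy feature, which is exactly the failure mode reported for the Amazon system (e.g. downgrading résumés mentioning "women's chess club captain").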

Amazon scraps secret AI recruiting tool that showed bias against women

Amazon.com Inc's <AMZN.O> machine-learning specialists uncovered a big problem: their new recruiting engine did not like women.

Reuters
@ct_bergstrom ❤️
That was just the impression I got from finding your post among the most popular on my server 🙂 don't take it too personally 🙂