"Galton and his fellow eugenicist / protégé Karl Pearson were not directly involved in the development of early computational machines. But Galton’s foundational work with multidimensional modeling — a technique he used while measuring the attractiveness of African and European women — shaped Pearson’s thinking as he developed statistical tools like logistic regression, which is one of the fundamental components of modern machine learning."

https://www.theverge.com/entertainment/897923/ghost-in-the-machine-valerie-veatch-interview

The gen AI Kool-Aid tastes like eugenics

Ghost in the Machine director Valerie Veatch — the film is out on Kinema March 26th — speaks with The Verge about gen AI’s roots in eugenics.

The Verge
@timnitGebru I'm no mathologist, but this seems like a stretch. Is the argument that mathematics used or developed by crappy people is itself suspect for that reason? Is the math not able to be valid on its own, if it was developed or associated with crappy people?
@wesdym @timnitGebru We’re not talking about theoretical mathematics here, we’re talking about technology. And yes, all technology is political. It is designed, funded, and developed (in the specific way that it is developed), shaped by the ideologies and success criteria of those who design, fund, and develop it.

@aral "Statistical tools" sounds an awful lot like mathematics to me.

But sure, whatever. Go put some more smart-sounding bumper-stickers on your car and smugly applaud yourself, I guess.

@wesdym @aral Maybe read the rest of the article. The issue is clearly not the algorithm but the way they think they can use it.
In statistics, outliers are considered part of the dataset even if they are flawed; they will eventually be discarded because of their low weight. The issue with this is that:
1. Some outliers represent actually discriminated cases or people.
2. Actual discrimination might not show up as an outlier at all, because of how widely it is spread through the dataset.

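To make those two failure modes concrete, here is a minimal sketch with made-up numbers (all data and the `trimmed_mean` helper are hypothetical, using trimming as a crude stand-in for low-weighting): a robust average silently drops a rare outlier, while a bias present across the whole dataset produces no outliers for it to drop.

```python
import statistics

def trimmed_mean(values, trim=0.1):
    """Drop the lowest/highest `trim` fraction of the data --
    a crude stand-in for giving outliers low weight."""
    s = sorted(values)
    k = int(len(s) * trim)
    core = s[k:len(s) - k] if k else s
    return statistics.mean(core)

# Case 1: one discriminated individual shows up as an outlier.
# The robust estimate simply discards that person's data point.
scores = [70, 71, 69, 72, 70, 68, 71, 70, 69, 30]
clean_estimate = trimmed_mean(scores)  # the 30 is trimmed away

# Case 2: a uniform bias (every score shifted down by 20) creates
# no outliers at all, so trimming cannot detect or correct it.
biased = [s - 20 for s in scores]
biased_estimate = trimmed_mean(biased)  # just as "clean", systematically wrong

print(clean_estimate, biased_estimate)
```

The biased estimate looks exactly as well-behaved as the clean one, which is the point: no amount of outlier handling fixes discrimination that pervades the dataset.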
@wesdym @aral Now that would mean you need to curate your datasets thoroughly. But that would mean doing a lot of extra work they don't believe is relevant, compared to just growing dataset size. Quantity over quality.

And that still doesn't address the issue of normalization, another discriminatory behavior: it pushes the algorithm toward the statistical optimum so that its output fits the norm, even when that answer is unoriginal.

@mehdi_benadel @wesdym @aral Whoa I'm gonna go ahead and block this person. I was too generous in my response before seeing this additional nonsense.