Good point! If you are curious about the historical roots of that AI bias, here is one starting point.
A veteran and history teacher has put together a video, over two hours long, about the history of Native Americans, a history that is entirely avoided in the typical US high school curriculum.
I thought I knew the basics of what happened to the indigenous peoples of North America after 1491 CE, but found I didn't know more than fragments of it. Recommended!
@aidenbenton "public data sets" -from where? How is diversity "captured"? What levels of indigenous data sovereignty are recognised in survey design? Are AIs trained with community held data sets built with collaboration of people who may be subjects of actions based on that ML training? ..
Being hypothetical here cos late to the thread. Sampling bias is deliberately built into some political polls. Hard to see other polls & AIs escaping deliberate or unconscious sampling biases.
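To make the sampling-bias point concrete, here is a minimal, purely illustrative Python sketch (all numbers are made up): the population as a whole is split roughly 50/50 on some question, but a poll that reaches one group far more often than the other reports a noticeably skewed figure.

```python
# Illustrative only: hypothetical population, hypothetical reach rates.
import random

random.seed(0)

# Population: two equally sized groups with different support rates (65% vs 35%),
# so overall support is about 50%.
population = [("A", random.random() < 0.65) for _ in range(50_000)] + \
             [("B", random.random() < 0.35) for _ in range(50_000)]

true_support = sum(ans for _, ans in population) / len(population)

# Biased poll: group A is three times as likely to be reached as group B.
def reached(group):
    return random.random() < (0.6 if group == "A" else 0.2)

sample = [(g, ans) for g, ans in population if reached(g)]
polled_support = sum(ans for _, ans in sample) / len(sample)

print(f"true support:   {true_support:.2%}")   # ~50%
print(f"polled support: {polled_support:.2%}") # ~57%, pulled toward group A
```

The same mechanism applies whether the lopsided reach is deliberate or just an artifact of how the data was collected, which is the worry with training sets.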
And when you train Russian dogs to explode non-Russian tanks, but all you have to practice on is Russian tanks... Which I think illustrates your point perfectly.

mic drop!
@aidenbenton The first time I asked an AI app to synthesize an illustration of two 19th-century Jews smiling at each other, I got a picture of two smiley Hasidic guys (which is fine) with noses just like those in the era's anti-Semitic caricatures (which isn't fine).
More recent attempts were better but still a teensy bit schnozzy.
@aidenbenton
and even applies to AI algorithms trained to diagnose chest X-rays:
https://www.nature.com/articles/s41591-021-01595-0
tl;dr: the AI has a higher rate of underdiagnosis/missed diagnoses in POC/marginalised populations
Artificial intelligence algorithms trained using chest X-rays consistently underdiagnose pulmonary abnormalities or diseases in historically under-served patient populations, raising ethical concerns about the clinical use of such algorithms.
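For anyone wondering what "higher rate of underdiagnosis" looks like in practice, here is a minimal sketch of that kind of audit: compare the false-negative rate of a classifier across demographic groups. This is not the paper's code; the file and column names (predictions.csv, group, has_finding, predicted) are assumptions for illustration.

```python
import pandas as pd

# Assumed layout: one row per patient with
#   group       - demographic group label
#   has_finding - 1 if a radiologist found an abnormality, else 0
#   predicted   - 1 if the model flagged an abnormality, else 0
df = pd.read_csv("predictions.csv")

# Underdiagnosis rate = share of truly abnormal cases the model missed.
positives = df[df["has_finding"] == 1]
rates = (positives["predicted"] == 0).groupby(positives["group"]).mean()

print(rates.sort_values(ascending=False))
# A consistently higher rate for any one group is the disparity the article warns about.
```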
@aidenbenton Yes. AI only models what it sees. And instead of "trying to repair" it, AI would best be used as a mirror of what people are really like.
Also, public application of #ai needs 100% #transparency on the #training data. No business decision is too important to allow cultural bias to contaminate the system.
This is where #ethicists need to work together with #computer #scientists. The problem is no longer algorithmic.
@aidenbenton Want to learn about #bias in #AI? Here's some recommended reading. Enjoy!
-#Ethics Codes Are Not Enough to Curb the Danger of Bias in AI (article, BRINKNews) https://bit.ly/3uzmnBf
-Discriminating Systems: #Gender, #Race & #Power in AI (NYU, #Google Open & #Microsoft #Research). This one's my fav but be warned, it's a behemoth at 33 pages. https://bit.ly/3UJvDxk
-The role of AI in mitigating bias to enhance #diversity & #inclusion (#IBM, 16pgs) https://bit.ly/3W0Axrf