Defining AI is a regulatory whac-a-mole.
Every time policymakers pin down what AI is, companies pivot to avoid scrutiny. Dr. Suresh Venkatasubramanian explains why this makes accountability so hard.
🔗 Listen here: https://youtu.be/GQiFnpK7Wyo
Algorithmic “Fairness”—Or Just a New Kind of Bias?
#AlgorithmicFairness #TechEthics #AIandSociety #DigitalJustice #TheInternetIsCrack
Time: 9am PT / 12pm ET
Date: Thursday, May 30, 2024
We invite you to join us on Thursday, May 30 at 9am PT / 12pm ET / 4pm GMT for a PAI Partner Roundtable focused on Algorithmic Fairness and Demographic Data. This one-hour, partner-exclusive meeting will include presentations from Eliza McCullough (Partnership on […]
I need some inspiration about getting out of corporates and transitioning to non-bullshit research or non profits.
I'd like to see some examples touching the topics ( #AIethics #AIResearch #responsibleAI #ML #MLeval #AlgorithmicFairness, etc.)!
Does anyone know anything about
Goethe's AI & Ethics fellowship programme? Or do you know anyone who could give more info? 👀
Importantly, standard #algorithmicfairness solutions are strictly limited in what they can achieve in this regard: if the statistical relationship between inputs and outputs is simply more noisy in some group, no amount of "fair learning" can fix this!
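The claim above can be made concrete with a toy simulation (my own illustrative sketch, not from the paper): if one group's observed labels are noisier, even the Bayes-optimal predictor, which is the best any "fair learning" method could hope to recover, scores worse on that group. All numbers here are assumptions for illustration.

```python
# Hypothetical simulation: label noise differs by group, so even the
# Bayes-optimal classifier cannot equalize error rates across groups.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# One informative feature; the true label depends on it identically in both groups.
x = rng.normal(size=n)
y_true = (x > 0).astype(int)

# Group B's observed labels are flipped far more often than group A's.
group = rng.integers(0, 2, size=n)            # 0 = A, 1 = B
flip_prob = np.where(group == 0, 0.05, 0.30)  # assumed noise rates
flips = rng.random(n) < flip_prob
y_obs = np.where(flips, 1 - y_true, y_true)

# The Bayes-optimal rule given x is the same for both groups: predict x > 0.
y_pred = (x > 0).astype(int)

acc_a = (y_pred == y_obs)[group == 0].mean()
acc_b = (y_pred == y_obs)[group == 1].mean()
print(f"accuracy vs. observed labels, group A: {acc_a:.2f}")  # ~0.95
print(f"accuracy vs. observed labels, group B: {acc_b:.2f}")  # ~0.70
```

No post-processing of this predictor can close the gap against the observed labels without deliberately degrading performance on group A, which is exactly the point.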
In the paper (co-authored with Sune Holm, @melanieganzben1, Aasa Feragen), we discuss many more concrete medical examples of the different sources of bias, and we propose some tentative solution approaches. 6/N
A couple of years ago, I wrote about the seeds of bad algorithm-assisted decision-making products as I was reading "Why We Sleep".
As a friend was reflecting on the book, I shared with her my views from this blog post 👇
https://www.onceupondata.com/post/how-do-harmful-algorithms-evolve/
How to fix this? The consequentialist framework (CF) for algorithmic fairness foregrounds the results of decisions, rather than properties of the predictions.
One starts by identifying the utility of different possible outcomes, e.g. efficiency and equity. Optimal decision policies can then be derived via linear programming that incorporates stakeholder preferences.
This approach has advantages over static experimental designs (e.g. randomized trials)
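A minimal sketch of that idea (my own illustration, not the authors' code): choose per-group treatment rates by solving a small linear program that maximizes total expected benefit subject to a budget and an equity constraint on how far the rates may diverge. Group sizes, benefit weights, the budget, and the allowed rate gap are all assumed values standing in for stakeholder preferences.

```python
# Illustrative consequentialist decision policy via linear programming.
# All parameters below are hypothetical stakeholder inputs.
from scipy.optimize import linprog

n_a, n_b = 600, 400               # group sizes
benefit_a, benefit_b = 0.8, 0.5   # expected benefit per treated person
budget = 500                      # total interventions available
max_rate_gap = 0.10               # equity: rates may differ by at most 10 points

# Variables: p_a, p_b = fraction of each group treated, each in [0, 1].
# We maximize total benefit, so linprog minimizes its negative.
c = [-n_a * benefit_a, -n_b * benefit_b]
A_ub = [
    [n_a, n_b],    # budget:  n_a*p_a + n_b*p_b <= budget
    [1, -1],       # equity:  p_a - p_b <= max_rate_gap
    [-1, 1],       # equity:  p_b - p_a <= max_rate_gap
]
b_ub = [budget, max_rate_gap, max_rate_gap]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)])
p_a, p_b = res.x
print(f"treat {p_a:.0%} of group A, {p_b:.0%} of group B")
```

With these numbers the optimum treats 54% of group A and 44% of group B: the equity constraint binds, pulling resources toward group B relative to a pure efficiency solution (which would treat group A only).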
The latest turn in the #algorithmicfairness debate is "leveling up":
https://www.wired.com/story/bias-statistics-artificial-intelligence-healthcare/
Striking:
"Technical solutions are often only a Band-aid to deal with a broken system. Improving access to health care, curating more diverse data sets, and developing tools that specifically target the problems faced by historically disadvantaged communities can help make substantive fairness a reality."
Not that there is no technical solution at all, but only one embedded within, may I say, a sociotechnical system.
Excerpts from the article:
The majority of algorithms developed to enforce “algorithmic fairness” were built without #policy and societal contexts in mind.
Our motivation for pursuing fairness is to improve the situation of a historically disadvantaged group.
When we build AI systems to make decisions about people's lives, our design decisions encode implicit value judgments about what should be prioritized.
Technical solutions are often only a Band-aid to deal with a broken system. Improving access to #HealthCare, curating more diverse data sets, and developing tools that specifically target the problems faced by historically disadvantaged communities can help make substantive fairness a reality.
#AI systems make life-changing decisions. Choices about how they should be fair, and to whom, are too important to treat #fairness as a simple mathematical problem to be solved.
#AlgorithmicFairness #MedicalSystem #AIEthics #FairML #ArtificialIntelligence
Article:
Health Care #Bias Is Dangerous. But So Are ‘Fairness’ #Algorithms
Paper:
The Unfairness of Fair #MachineLearning: Levelling down and strict egalitarianism by default