Defining AI is a regulatory whac-a-mole.

Every time policymakers pin down what AI is, companies pivot to avoid scrutiny. Dr. Suresh Venkatasubramanian explains why this makes accountability so hard.

🔗 Listen here: https://youtu.be/GQiFnpK7Wyo

#AI #Ethics #TechRegulation #Podcast #AlgorithmicFairness

Algorithmic “Fairness”—Or Just a New Kind of Bias?

#AlgorithmicFairness #TechEthics #AIandSociety #DigitalJustice #TheInternetIsCrack

TODAY: Join CDT’s Miranda Bogen for a PAI Partner Roundtable on Algorithmic Fairness & Demographic Data, where she will join Eliza McCullough, Janet Haven, and Daniel Ho. Tune in LIVE at 12 ET. #AlgorithmicFairness #AI https://cdt.org/event/pai-partner-roundtable-demographic-data-algorithmic-fairness/
PAI Partner Roundtable: Demographic Data & Algorithmic Fairness

Time: 9am PT / 12pm ET
Date: Thursday, May 30, 2024

We invite you to join us on Thursday, May 30 at 9am PT / 12pm ET / 4pm GMT for a PAI Partner Roundtable focused on Algorithmic Fairness and Demographic Data. This one-hour, partner-exclusive meeting will include presentations from Eliza McCullough (Partnership on […]

Center for Democracy and Technology

I need some inspiration about getting out of corporates and transitioning to non-bullshit research or nonprofits.

I'd like to see some examples touching on these topics (#AIethics #AIResearch #responsibleAI #ML #MLeval #AlgorithmicFairness, etc.)!

Does anyone know anything about
Goethe's AI & Ethics fellowship programme? Or do you know anyone who could give more info? 👀

https://www.goethe.de/aiethics

#aiethics #algorithmicfairness #aifairness

AI and Ethics

In an interdisciplinary Europe-wide approach involving input talks, discussions and practical workshops, the AI & Ethics Summer School delves into ethical issues of AI applications and provides practical tools for identifying and addressing these issues.

Importantly, standard #algorithmicfairness solutions are strictly limited in what they can achieve in this regard: if the statistical relationship between inputs and outputs is simply more noisy in some group, no amount of "fair learning" can fix this!
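The noise point is easy to demonstrate with a quick simulation (a hypothetical sketch, not from the paper: group sizes, noise rates, and the threshold rule are all made up for illustration). Two groups share the same feature-label relationship, but one group's labels carry extra noise, so even the Bayes-optimal classifier shows a persistent accuracy gap that no reweighting of the training objective can close:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups with an identical feature -> label relationship,
# except group B's observed labels are flipped with probability 0.2.
x = rng.normal(size=n)
group = rng.integers(0, 2, size=n)  # 0 = A, 1 = B
y_clean = (x + rng.normal(scale=0.5, size=n) > 0).astype(int)
flip = (group == 1) & (rng.random(n) < 0.2)
y = np.where(flip, 1 - y_clean, y_clean)

# The Bayes-optimal classifier is the SAME threshold rule for both groups;
# there is no better model for group B waiting to be found.
pred = (x > 0).astype(int)

acc_a = (pred == y)[group == 0].mean()
acc_b = (pred == y)[group == 1].mean()
print(f"accuracy A: {acc_a:.3f}, accuracy B: {acc_b:.3f}")
```

The gap here reflects the labels, not the learner: collecting better outcome data for group B would help, while constraining the classifier cannot.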

In the paper (co-authored with Sune Holm, @melanieganzben1, Aasa Feragen), we discuss many more concrete medical examples of the different sources of bias, and we propose some tentative solution approaches. 6/N

A couple of years ago, I wrote about the seeds of bad algorithm-assisted decision-making products as I was reading "Why We Sleep".

As a friend was reflecting on the book, I shared with her the views I wrote up in this blog post 👇
https://www.onceupondata.com/post/how-do-harmful-algorithms-evolve/

#aiethics #aifairness #algorithmicfairness

The Seeds of Bad Data Products!

The adoption of not-so-scientific facts and the path to harmful algorithms

How to fix this? The consequentialist framework (CF) for algorithmic fairness foregrounds the results of decisions, rather than properties of the predictions.

One starts by identifying the utility of different possible outcomes, eg efficiency and equity. Optimal decision policies can then be derived via linear programming over stakeholder preferences.

This approach has advantages over static experimental designs (eg randomized trials)
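A minimal sketch of the LP step (all numbers hypothetical, not from the paper): choose treatment rates for two groups to maximize stakeholder-weighted benefit, subject to a budget and an equity constraint capping the gap between the groups' treatment rates:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical setup: pick treatment rates p_A, p_B for two groups.
benefit = np.array([0.8, 0.6])  # expected benefit per treated person, by group
size = np.array([100, 100])     # group sizes
budget = 120                    # total treatments available

# Maximize total benefit subject to the budget and an equity constraint
# limiting the treatment-rate gap: |p_A - p_B| <= 0.1.
c = -(benefit * size)           # linprog minimizes, so negate the objective
A_ub = [size,                   # size_A * p_A + size_B * p_B <= budget
        [1, -1],                # p_A - p_B <= 0.1
        [-1, 1]]                # p_B - p_A <= 0.1
b_ub = [budget, 0.1, 0.1]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)])
p_A, p_B = res.x
print(f"p_A = {p_A:.2f}, p_B = {p_B:.2f}")
```

Here the LP spends the full budget while keeping the rate gap at the stakeholder-chosen cap; changing the utility weights or the gap bound directly changes the policy, which is the point of putting stakeholder preferences into the optimization.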

#EthicalAI #MonthOfArxiv #AlgorithmicFairness

The latest turn in the #algorithmicfairness debate is "leveling up":

https://www.wired.com/story/bias-statistics-artificial-intelligence-healthcare/

Striking:

"Technical solutions are often only a Band-aid to deal with a broken system. Improving access to health care, curating more diverse data sets, and developing tools that specifically target the problems faced by historically disadvantaged communities can help make substantive fairness a reality."

Not that there is no technical solution at all, but that it works only within - may I say - a sociotechnical system.

Health Care Bias Is Dangerous. But So Are ‘Fairness’ Algorithms

Medical systems disproportionately fail people of color, but a focus on fixing the numbers could lead to worse outcomes.

WIRED

Excerpts from the article:
The majority of algorithms developed to enforce “algorithmic fairness” were built without #policy and societal contexts in mind.

Our motivation for pursuing fairness is to improve the situation of a historically disadvantaged group.

When we build AI systems to make decisions about people's lives, our design decisions encode implicit value judgments about what should be prioritized.

Technical solutions are often only a Band-aid to deal with a broken system. Improving access to #HealthCare, curating more diverse data sets, and developing tools that specifically target the problems faced by historically disadvantaged communities can help make substantive fairness a reality.

#AI systems make life-changing decisions. Choices about how they should be fair, and to whom, are too important to treat #fairness as a simple mathematical problem to be solved.

#AlgorithmicFairness #MedicalSystem #AIEthics #FairML #ArtificialIntelligence

Article:
Health Care #Bias Is Dangerous. But So Are ‘Fairness’ #Algorithms

https://www-wired-com.cdn.ampproject.org/c/s/www.wired.com/story/bias-statistics-artificial-intelligence-healthcare/amp

Paper:
The Unfairness of Fair #MachineLearning: Levelling down and strict egalitarianism by default

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4331652
