"Learning to Be Fair: A Consequentialist Approach to Equitable Decision-Making"

A common approach to designing fair machine learning systems is to enforce parity in decisions or error rates across groups defined by race, gender, etc. But the statistical definition really matters: some strategies sound fair yet ignore downstream effects, and can cause unexpected harm to the very groups we are trying to protect.

🔗 https://arxiv.org/abs/2109.08792

#MonthOfArxiv #bias #ethicalAI #machinelearning

Learning to be Fair: A Consequentialist Approach to Equitable Decision-Making

In an attempt to make algorithms fair, the machine learning literature has largely focused on equalizing decisions, outcomes, or error rates across race or gender groups. To illustrate, consider a hypothetical government rideshare program that provides transportation assistance to low-income people with upcoming court dates. Following this literature, one might allocate rides to those with the highest estimated treatment effect per dollar, while constraining spending to be equal across race groups. That approach, however, ignores the downstream consequences of such constraints, and, as a result, can induce unexpected harms. For instance, if one demographic group lives farther from court, enforcing equal spending would necessarily mean fewer total rides provided, and potentially more people penalized for missing court. Here we present an alternative framework for designing equitable algorithms that foregrounds the consequences of decisions. In our approach, one first elicits stakeholder preferences over the space of possible decisions and the resulting outcomes--such as preferences for balancing spending parity against court appearance rates. We then optimize over the space of decision policies, making trade-offs in a way that maximizes the elicited utility. To do so, we develop an algorithm for efficiently learning these optimal policies from data for a large family of expressive utility functions. In particular, we use a contextual bandit algorithm to explore the space of policies while solving a convex optimization problem at each step to estimate the best policy based on the available information. This consequentialist paradigm facilitates a more holistic approach to equitable decision-making.

Algorithms help make decisions in numerous domains: loans in banking, bail in criminal justice, the allocation of limited resources in healthcare.

Common approaches to designing fair algorithms either exclude protected attributes (e.g., race and gender) from the model, or constrain the predictive model to yield similar error rates across groups.

These constraints can have unexpected consequences. For example, gender-blind criminal risk assessments overestimate the risk that female defendants will recidivate, leading to increased detention rates for women.

Consider helping individuals attend appointments (like court dates) with a limited budget: someone must decide whom to allocate resources to.

A natural approach is to prioritize those with the largest estimated treatment effect per dollar, as in the sketch below.
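To make that concrete, here is a minimal sketch of the "effect per dollar" heuristic. All numbers are hypothetical, and the greedy knapsack-style allocation is my illustration, not the paper's method:

```python
# Minimal sketch of "effect per dollar" prioritization (hypothetical data).
# Each person has an estimated treatment effect (increase in probability
# of appearing in court if given a ride) and an estimated ride cost.

people = [
    {"id": 1, "effect": 0.30, "cost": 5.0},   # lives close to court
    {"id": 2, "effect": 0.35, "cost": 12.0},  # lives far from court
    {"id": 3, "effect": 0.20, "cost": 4.0},
    {"id": 4, "effect": 0.25, "cost": 10.0},
]
budget = 15.0

# Rank by estimated effect per dollar, then fund rides greedily.
ranked = sorted(people, key=lambda p: p["effect"] / p["cost"], reverse=True)
funded, spent = [], 0.0
for p in ranked:
    if spent + p["cost"] <= budget:
        funded.append(p["id"])
        spent += p["cost"]

print(funded, spent)  # the cheap (nearby) clients get funded first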

However, this implicitly prioritizes those living closest to the courthouse, for whom rides are typically less expensive. In a case study in Santa Clara County, this strategy would end up spending $7.40 on white clients vs. $5.38 on Vietnamese clients.

#EthicalAI

How to fix this? The consequentialist framework for algorithmic fairness foregrounds the outcomes of decisions, rather than properties of the predictions.

One starts by eliciting stakeholder preferences over the possible outcomes, e.g., how to balance efficiency against equity. Optimal decision policies can then be derived with linear programming that encodes those preferences, as sketched below.
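As an illustration, here is a hedged sketch of such a program using scipy's linprog. It maximizes expected court appearances minus a stakeholder-chosen penalty lambda on the spending gap between two groups; the data and this particular utility are hypothetical simplifications, while the paper itself optimizes a broader family of utilities inside a contextual bandit:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical inputs: per-person estimated treatment effects, ride costs,
# and group membership (0 or 1). A stakeholder-chosen lambda trades off
# total appearances against the spending gap between the two groups.
effect = np.array([0.30, 0.35, 0.20, 0.25])
cost   = np.array([5.0, 12.0, 4.0, 10.0])
group  = np.array([0, 1, 0, 1])
budget, lam = 15.0, 0.01

n = len(effect)
# Decision variables: x = [p_1..p_n, g], where p_i is the probability of
# funding person i and g upper-bounds the absolute spending gap.
c = np.concatenate([-effect, [lam]])           # minimize -appearances + lam*g

spend0 = np.where(group == 0, cost, 0.0)       # group-0 spending coefficients
spend1 = np.where(group == 1, cost, 0.0)
A_ub = np.vstack([
    np.concatenate([cost,  [0.0]]),            # total spending <= budget
    np.concatenate([spend0 - spend1, [-1.0]]), # spend0 - spend1 <= g
    np.concatenate([spend1 - spend0, [-1.0]]), # spend1 - spend0 <= g
])
b_ub = np.array([budget, 0.0, 0.0])
bounds = [(0.0, 1.0)] * n + [(0.0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("funding probabilities:", res.x[:n].round(2))
print("spending gap bound g:", res.x[n].round(2))
```

Varying lambda traces out the efficiency-equity frontier implied by different stakeholder preferences, rather than imposing a hard parity constraint.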

This approach also has advantages over static experimental designs (e.g., randomized controlled trials).

#EthicalAI #MonthOfArxiv #AlgorithmicFairness

The paper shows that

"using adaptive experimental designs with our framework yields better outcomes for participants during learning, and often more quickly identifies higher utility decision policies for future use, compared to static experimental approaches like randomized control trials."

which should be an important result for anyone who cares about fairness.
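As rough intuition for why adaptive designs are better for participants during learning, here is an epsilon-greedy sketch. This is a deliberate simplification (two arms, simulated outcomes), not the paper's contextual-bandit-plus-convex-optimization algorithm:

```python
import random

# Epsilon-greedy sketch of adaptive experimentation (hypothetical numbers).
# Two "arms": offer a ride or not. true_appear is unknown to the learner
# and is used only to simulate court-appearance outcomes.
true_appear = {"ride": 0.80, "no_ride": 0.55}
counts = {"ride": 0, "no_ride": 0}
values = {"ride": 0.0, "no_ride": 0.0}   # running appearance-rate estimates
epsilon = 0.1

random.seed(0)
for t in range(2000):
    if random.random() < epsilon:                 # explore occasionally
        arm = random.choice(["ride", "no_ride"])
    else:                                         # exploit current estimate
        arm = max(values, key=values.get)
    outcome = random.random() < true_appear[arm]  # simulated appearance
    counts[arm] += 1
    values[arm] += (outcome - values[arm]) / counts[arm]  # incremental mean

print(values, counts)  # most trials concentrate on the better policy
```

Compared to a static RCT that keeps a 50/50 split for the whole study, most participants here end up receiving whichever policy currently looks better, while estimates of both policies still improve.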

The main takeaway is that causal definitions of algorithmic fairness lead to Pareto-dominated policies.

In plain English, this means: whatever your preferences (more efficiency or more equity), there will always be another policy that is more satisfying to everyone involved.

Or, even simpler: insisting on such definitions leaves everyone involved worse off.

#EthicalAI #MonthOfArxiv #machineLearning