In fact, what we have described here is the best-case scenario, in which fairness can be achieved through simple changes that affect per-group performance. In practice, fairness algorithms can behave far more radically and unpredictably. One survey found that, on average, most computer vision fairness algorithms improve fairness by harming all groups, for example by reducing recall and accuracy. Unlike our hypothetical case, in which harm was reduced for one group, leveling down can directly worsen outcomes for everyone.
Leveling down conflicts with the goals of algorithmic fairness and of broader equality in society: to improve outcomes for historically disadvantaged or marginalized groups. Reducing performance for high-performing groups is self-evidently of no benefit to lower-performing groups. Moreover, leveling down can directly harm historically disadvantaged groups. Choosing to strip away a benefit rather than share it signals a lack of concern, solidarity, and willingness to take the opportunity to actually fix the problem. It stigmatizes historically disadvantaged groups and reinforces the separateness and social inequality that led to the problem in the first place.
When we build AI systems to make decisions about people’s lives, our design decisions encode implicit value judgments about what should be prioritized. Leveling down is a consequence of choosing to measure and remedy fairness solely in terms of disparity between groups, while ignoring utility, welfare, priority, and other goods that are central to questions of fairness in the real world. It is not the inevitable fate of algorithmic fairness; rather, it is the result of taking the path of least mathematical resistance, not of any overarching social, legal, or ethical imperative.
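To see why disparity-only metrics invite leveling down, consider a minimal sketch in Python, using made-up per-group accuracies: a metric that scores only the gap between groups cannot distinguish an intervention that raises the worse-off group from one that merely drags down the better-off group.

```python
# Hypothetical per-group accuracies for a binary classifier.
baseline = {"group_a": 0.92, "group_b": 0.75}

def fairness_gap(scores):
    """A gap-only fairness metric: absolute difference in group accuracy."""
    return abs(scores["group_a"] - scores["group_b"])

# Intervention 1: level UP by improving the worse-off group.
leveled_up = {"group_a": 0.92, "group_b": 0.92}

# Intervention 2: level DOWN by degrading the better-off group.
leveled_down = {"group_a": 0.75, "group_b": 0.75}

for name, scores in [("baseline", baseline),
                     ("leveled up", leveled_up),
                     ("leveled down", leveled_down)]:
    print(f"{name:12s}  gap = {fairness_gap(scores):.2f}  {scores}")

# Both interventions drive the gap to 0.00: judged by disparity alone,
# a universally worse outcome looks exactly as "fair" as a better one.
```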
Going forward, we face a choice:
• We can continue to deploy biased systems that ostensibly benefit only a privileged segment of the population while seriously harming others.
• We can continue to define fairness in formalistic mathematical terms and deploy AI systems that are less accurate for all groups and actively harmful to some.
• We can take action and achieve fairness by “leveling up”.
We believe that leveling up is the only morally, ethically, and legally acceptable way forward. The challenge for the future of fairness in AI is to create systems that are substantively fair, not merely procedurally fair at the cost of leveling down. Leveling up is the harder path, and it must be combined with proactive steps to root out the real causes of bias in AI systems. Technical fixes are often just a patch over a failing system. Improving access to healthcare, collecting more diverse datasets, and building tools designed specifically for the problems faced by historically disadvantaged communities can help make substantive fairness a reality.
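One way to make “leveling up” operational, sketched below with the same hypothetical numbers as before, is to replace gap minimization with a max-min objective: select or train the model that maximizes the worst-off group’s performance. By construction, this criterion can only be improved by helping the group that is currently worst served, never by degrading the others.

```python
# Hypothetical candidate models with per-group accuracies.
candidates = {
    "model_1": {"group_a": 0.92, "group_b": 0.75},  # accurate but unequal
    "model_2": {"group_a": 0.76, "group_b": 0.75},  # equal via leveling down
    "model_3": {"group_a": 0.90, "group_b": 0.84},  # leveled up
}

def select_by_gap(models):
    """Gap-only selection: prefers the leveled-down model."""
    return min(models, key=lambda m: abs(models[m]["group_a"] - models[m]["group_b"]))

def select_by_max_min(models):
    """Max-min selection: pick the model whose worst-off group does best."""
    return max(models, key=lambda m: min(models[m].values()))

print(select_by_gap(candidates))      # model_2: equal, but worse for everyone
print(select_by_max_min(candidates))  # model_3: best worst-group accuracy
```

Max-min selection is only one possible formalization; the broader point is that the objective itself should encode whom the system is meant to help.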
This is a far harder task than tweaking a system until two numbers are equal across groups. It may require not only major technical and methodological innovation, up to and including redesigning AI systems from scratch, but also substantial social change, for example in the accessibility and cost of healthcare.
However difficult, this refocusing of “fair AI” is necessary. AI systems make life-changing decisions, and the choice of how they should be fair, and to whom, is too important to treat fairness as a simple mathematical problem to be solved. It is the status quo that has produced fairness methods that achieve equality by leveling down. So far, we have built methods that are mathematically sound but do not, and cannot, deliver demonstrable benefits to disadvantaged groups.
That is not enough. Existing tools are treated as a solution to algorithmic fairness, but so far they have not lived up to their promise. Their morally murky consequences make them less likely to be adopted and may be slowing the real resolution of these problems. What we need are systems that are fair by leveling up: systems that help lower-performing groups without arbitrarily harming others. That is the problem we must now solve. We need AI that is substantively, not just mathematically, fair.