Fairness is considered by Credo AI to be one of the six key principles of Responsible AI.
Fairness takes seriously that AI systems ultimately automate the distribution of benefits and harms among people, and that the nature of that distribution deserves careful consideration. There is a lot to this topic, but here we will focus on the highest-level distinction: the one between individual and group fairness.
Group fairness is the more common approach, and it requires answering a basic question: what is a group? For most applications, groups are taken to correspond to legally protected characteristics. For instance, "race," as defined by census categories, may define a group. While legal definitions are relevant, it is prudent to think carefully about which groups are relevant for a particular use case.
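To make this concrete, here is a minimal sketch (using hypothetical data and group labels) of one common group fairness metric, demographic parity: compare the rate of favorable decisions the model makes across groups.

```python
import pandas as pd

# Hypothetical decisions for two groups; 1 = favorable outcome.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B", "A"],
    "prediction": [ 1,   0,   1,   0,   0,   1,   0,   1 ],
})

# Positive-decision ("selection") rate per group.
selection_rates = df.groupby("group")["prediction"].mean()
print(selection_rates)

# Demographic parity difference: gap between highest and lowest rates.
dp_difference = selection_rates.max() - selection_rates.min()
print(f"Demographic parity difference: {dp_difference:.2f}")
```

In this toy example the gap is 0.50, which would flag a large disparity in how often each group receives the favorable outcome.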
An important finding is that it is often impossible to satisfy individual and group fairness at the same time, except in specific circumstances. Each machine learning use case therefore requires deliberate thought about what fairness means in that context; there is no single fairness metric that can simply be optimized in every situation.
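The following sketch (again with hypothetical labels and predictions) illustrates why no single metric suffices: the same predictions can satisfy demographic parity exactly while showing a clear gap in true positive rates (equal opportunity), so choosing which metric to prioritize is a judgment about the use case.

```python
import numpy as np

# Hypothetical ground-truth labels and model predictions for two groups.
y_true_a = np.array([1, 1, 1, 0, 0])
y_pred_a = np.array([1, 1, 0, 0, 0])
y_true_b = np.array([1, 0, 0, 0, 0])
y_pred_b = np.array([1, 0, 1, 0, 0])

def selection_rate(y_pred):
    # Share of individuals receiving the favorable decision.
    return y_pred.mean()

def true_positive_rate(y_true, y_pred):
    # Share of truly qualified individuals receiving the favorable decision.
    return y_pred[y_true == 1].mean()

# Demographic parity compares selection rates; equal opportunity compares
# true positive rates. The same model can look fair by one and not the other.
print("Selection rate gap:",
      abs(selection_rate(y_pred_a) - selection_rate(y_pred_b)))   # 0.0
print("True positive rate gap:",
      abs(true_positive_rate(y_true_a, y_pred_a)
          - true_positive_rate(y_true_b, y_pred_b)))              # ~0.33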
Researchers have described different worldviews to help practitioners develop their own fairness perspective. While it is beyond the scope of this short glossary to say how these worldviews should shape downstream decisions, they lay the groundwork for reasoning about them.
The We're All Equal (WAE) worldview assumes that groups are fundamentally similar, so differences in observed outcomes are attributed to structural bias or unfairness in the data generation process. Its counterpart, What You See Is What You Get (WYSIWYG), assumes that the data faithfully reflect the underlying reality, so observed differences in outcomes are treated as real.