Fairness in Machine Learning

Machine Learning fairness is relevant to almost every field where Machine Learning can be applied:

  • Autonomous machines
  • Job application workflows
  • Predictive models for the justice system
  • Online shopping recommendation systems
  • etc.

Many of the causes of unfairness or bias in ML can be traced back to the original training data. Some common causes include:

  • Skewed observations
  • Tainted observations
  • Limited features
  • Sample size disparity
  • Proxies

Some of the algorithms and metrics discussed in these pages:

Group fairness

Group fairness metrics assess the fairness of a decision-making process or outcome for different groups within a population. They are used to evaluate systems or policies that have an impact on groups defined by characteristics such as race, gender, or age. Group fairness metrics can help identify potential biases in decision-making processes and ensure that outcomes are just and equitable for all individuals.

Some common types of group fairness metrics include:

  • Statistical Parity: This metric assesses whether the proportion of positive outcomes (e.g. being approved for a loan) is the same for all groups.
  • Demographic parity: This metric assesses whether the probability of a positive outcome is the same for all groups; it is often used interchangeably with statistical parity.
  • Equal opportunity: This metric assesses whether the probability of a positive outcome is the same for individuals from different groups who have the same qualifications or characteristics.
  • Equalized odds: This metric assesses whether the true positive rate and false positive rate are the same for all groups.
  • Predictive parity: This metric assesses whether the positive predictive value (i.e. the probability that an individual who receives a positive prediction actually experiences a positive outcome) is the same for all groups.

It is important to note that group fairness metrics are not a substitute for addressing the root causes of inequality, but they can help identify and mitigate potential biases in decision-making processes.
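As an illustration of how these definitions translate into computations, the following is a minimal sketch (using NumPy, with hypothetical toy data and a binary group variable) of how equal opportunity and equalized odds gaps could be measured from true labels and binary predictions:

```python
import numpy as np

def equalized_odds_gaps(y_true, y_pred, group):
    """Compute TPR and FPR differences between two groups (0 and 1).

    Equal opportunity only requires the TPR difference to be (close to) zero;
    equalized odds requires both the TPR and FPR differences to be small.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in (0, 1):
        mask = group == g
        tpr = y_pred[mask & (y_true == 1)].mean()  # true positive rate in group g
        fpr = y_pred[mask & (y_true == 0)].mean()  # false positive rate in group g
        rates[g] = (tpr, fpr)
    tpr_gap = rates[0][0] - rates[1][0]
    fpr_gap = rates[0][1] - rates[1][1]
    return tpr_gap, fpr_gap

# Hypothetical toy data: first four individuals in group 0, last four in group 1
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(equalized_odds_gaps(y_true, y_pred, group))
```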

Statistical Parity

There are several different types of statistical parity metrics that can be used to assess the fairness of a decision-making process or outcome for different groups within a population. Some common types of statistical parity metrics include:

  • Group Statistical Parity: This metric assesses whether the proportion of positive outcomes (e.g. being approved for a loan) is the same for all groups.
  • Statistical parity difference (SPD): This metric measures the difference between the proportions of positive outcomes for two groups.
  • Disparate Impact Ratio (DIR): This metric measures the ratio between the proportions of positive outcomes for two groups.
  • Subgroup statistical parity: This metric assesses whether the proportion of positive outcomes is the same for subgroups within a larger group. For example, this could be used to assess the fairness of a hiring process for men and women within a particular job category.
  • Individual statistical parity: This metric assesses whether the probability of a positive outcome is the same for all individuals, regardless of their group membership.
  • Pairwise statistical parity: This metric assesses whether the probability of a positive outcome is the same for all pairs of groups. For example, this could be used to compare the probability of a positive outcome for men and women, as well as for men and people of other gender identities.

It is important to note that no single statistical parity metric is a perfect measure of fairness, and different metrics may be more or less appropriate depending on the specific context and goals of the evaluation. It may also be helpful to use a combination of different statistical parity metrics in order to get a more comprehensive understanding of the fairness of a decision-making process or outcome.
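As a rough sketch of how several of these checks can be run at once, the snippet below (pure Python, with hypothetical outcome and group data) computes the positive-outcome rate per group and the pairwise differences between all pairs of groups:

```python
from itertools import combinations

def positive_rates(outcomes, groups):
    """Positive-outcome rate per group, e.g. loan approval rate by race."""
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return rates

def pairwise_gaps(rates):
    """Pairwise statistical parity: rate difference for every pair of groups."""
    return {(a, b): rates[a] - rates[b] for a, b in combinations(sorted(rates), 2)}

# Hypothetical outcomes (1 = approved) and group labels
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "B", "B", "B", "C", "C", "C", "C"]
rates = positive_rates(outcomes, groups)
print(rates)                 # per-group approval rates (group statistical parity)
print(pairwise_gaps(rates))  # pairwise rate differences
```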

Group Statistical Parity

Statistical Parity Difference (SPD) and Group Statistical Parity are two different group fairness metrics that are used to assess the fairness of a decision-making process or outcome for different groups within a population.

Group statistical parity measures whether the proportion of positive outcomes (e.g. being approved for a loan) is the same for all groups. For example, if the proportion of race A applicants who are approved for a loan is 50%, and the proportion of race B applicants who are approved is also 50%, then the loan approval process could be considered fair according to this metric.

Statistical parity difference

Statistical parity difference (SPD), on the other hand, measures the difference between the proportion of positive outcomes for two groups. It is often used to assess the fairness of a decision-making process or outcome where there are two groups of interest, such as men and women or people of different racial groups. SPD is calculated as the difference between the proportion of positive outcomes for one group and the proportion of positive outcomes for the other group.

One key difference between group statistical parity and SPD is that group statistical parity assesses fairness for all groups within a population, while SPD is specifically designed to compare the fairness of two groups. Group statistical parity is also based on proportions, while SPD is based on the difference between proportions.

For example, consider a credit approval process where 60% of white applicants are approved and 50% of Black applicants are approved. According to group statistical parity, this process would not be considered fair, as the proportion of approved applicants is not the same for both groups. However, according to SPD, the difference between the proportions of approved applicants for the two groups is 10 percentage points, which may be considered acceptable depending on the specific context and goals of the evaluation.

The formal definition of SPD is

$$ SPD=p(\hat{y}=1|\mathcal{D}_u)-p(\hat{y}=1|\mathcal{D}_p), $$

where $\hat{y}=1$ is the favourable outcome and $\mathcal{D}_u, \mathcal{D}_p$ are respectively the unprivileged and privileged group data. A negative SPD therefore indicates that the unprivileged group receives the favourable outcome less often than the privileged group.
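Continuing the credit example above, a minimal sketch of this computation (assuming binary predictions and a binary protected attribute, with an encoding chosen for this example) might look like:

```python
import numpy as np

def statistical_parity_difference(y_pred, protected):
    """SPD = P(y_hat = 1 | unprivileged) - P(y_hat = 1 | privileged).

    `protected` is 1 for the unprivileged group and 0 for the privileged group
    (an assumed encoding for this sketch).
    """
    y_pred, protected = np.asarray(y_pred), np.asarray(protected)
    p_unpriv = y_pred[protected == 1].mean()
    p_priv = y_pred[protected == 0].mean()
    return p_unpriv - p_priv

# Toy data reproducing the example above: 60% vs. 50% approval rates
y_pred    = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
protected = [0] * 10 + [1] * 10  # first 10 privileged, last 10 unprivileged
print(statistical_parity_difference(y_pred, protected))  # -0.1
```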

Disparate Impact Ratio

Disparate Impact Ratio (DIR) is specifically a ratio-based statistical parity metric, as it measures the ratio of the probability of a positive outcome for one group to the probability of a positive outcome for another group. It is often used to assess the fairness of a decision-making process or outcome where there are two groups of interest, such as men and women or people of different racial groups.

The formal definition of DIR is

$$ DIR=\frac{p(\hat{y}=1|\mathcal{D}_u)}{p(\hat{y}=1|\mathcal{D}_p)}. $$
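A similarly minimal sketch (using the same assumed encoding and toy data as the SPD example) computes DIR directly. Values below roughly 0.8 are often flagged following the four-fifths rule, although what counts as acceptable is context dependent:

```python
import numpy as np

def disparate_impact_ratio(y_pred, protected):
    """DIR = P(y_hat = 1 | unprivileged) / P(y_hat = 1 | privileged)."""
    y_pred, protected = np.asarray(y_pred), np.asarray(protected)
    p_unpriv = y_pred[protected == 1].mean()
    p_priv = y_pred[protected == 0].mean()
    return p_unpriv / p_priv

# Reusing the 60% vs. 50% credit example: DIR = 0.5 / 0.6 ≈ 0.83
y_pred    = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
protected = [0] * 10 + [1] * 10  # first 10 privileged, last 10 unprivileged
print(disparate_impact_ratio(y_pred, protected))
```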