preprint

Metrizing Fairness

Rychener, Yves • Taskesen, Bahar • Kuhn, Daniel
2022

We study supervised learning problems for predicting properties of individuals who belong to one of two demographic groups, and we seek predictors that are fair according to statistical parity. This means that the distributions of the predictions within the two groups should be close with respect to the Kolmogorov distance, and fairness is achieved by penalizing the dissimilarity of these two distributions in the objective function of the learning problem. In this paper, we showcase conceptual and computational benefits of measuring unfairness with integral probability metrics (IPMs) other than the Kolmogorov distance. Conceptually, we show that the generator of any IPM can be interpreted as a family of utility functions and that unfairness with respect to this IPM arises if individuals in the two demographic groups have diverging expected utilities. We also prove that the unfairness-regularized prediction loss admits unbiased gradient estimators if unfairness is measured by the squared L2-distance or by a squared maximum mean discrepancy. In this case, the fair learning problem is amenable to efficient stochastic gradient descent (SGD) algorithms. Numerical experiments on real data show that these SGD algorithms outperform state-of-the-art methods for fair learning in that they achieve superior accuracy-unfairness trade-offs, sometimes orders of magnitude faster. Finally, we identify conditions under which statistical parity can improve prediction accuracy.
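For context on the computational claim: an IPM with generator class F compares two distributions P and Q via d_F(P, Q) = sup_{f in F} |E_P[f] - E_Q[f]|; when F is the unit ball of a reproducing kernel Hilbert space, d_F is the maximum mean discrepancy (MMD), and its square admits a standard unbiased U-statistic estimator (Gretton et al., 2012). The sketch below is not the authors' implementation; it only illustrates how such an estimator can serve as an unfairness penalty inside an ordinary SGD loop. The Gaussian kernel and its bandwidth, the linear model, the penalty weight rho, and the random data are all illustrative assumptions.

```python
# A minimal sketch (not the authors' code): SGD on a prediction loss plus a
# squared-MMD unfairness penalty between the two groups' prediction
# distributions. Model, kernel bandwidth, data, and `rho` are assumptions.
import torch

def gaussian_kernel(a, b, sigma=1.0):
    # k(a_i, b_j) = exp(-(a_i - b_j)^2 / (2 * sigma^2)) on 1-D predictions.
    return torch.exp(-(a.unsqueeze(1) - b.unsqueeze(0)) ** 2 / (2 * sigma ** 2))

def mmd2_unbiased(p0, p1, sigma=1.0):
    # U-statistic estimator of the squared MMD: the diagonal (self-similarity)
    # terms are dropped so the within-group averages are unbiased.
    m, n = p0.shape[0], p1.shape[0]
    k00 = gaussian_kernel(p0, p0, sigma)
    k11 = gaussian_kernel(p1, p1, sigma)
    k01 = gaussian_kernel(p0, p1, sigma)
    return ((k00.sum() - k00.trace()) / (m * (m - 1))
            + (k11.sum() - k11.trace()) / (n * (n - 1))
            - 2.0 * k01.mean())

model = torch.nn.Linear(5, 1)                  # toy predictor on 5 features
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
rho = 1.0                                      # hypothetical penalty weight

for step in range(100):                        # synthetic minibatches
    x, y = torch.randn(64, 5), torch.randn(64, 1)
    group = torch.randint(0, 2, (64,))         # binary demographic attribute
    pred = model(x)
    loss = torch.nn.functional.mse_loss(pred, y)
    p0 = pred[group == 0].squeeze(-1)
    p1 = pred[group == 1].squeeze(-1)
    if p0.numel() > 1 and p1.numel() > 1:      # U-statistic needs >= 2 per group
        loss = loss + rho * mmd2_unbiased(p0, p1)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the minibatch penalty is an unbiased estimate of the population squared MMD, the resulting stochastic gradients are unbiased as well, which mirrors the property the abstract credits with making the fair learning problem efficiently solvable by SGD.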

Type
preprint
ArXiv ID
2205.15049v1
Author(s)
Rychener, Yves • Taskesen, Bahar • Kuhn, Daniel
Date Issued
2022
Subjects
Algorithmic fairness • Supervised learning • Stochastic gradient descent
Editorial or Peer reviewed
NON-REVIEWED
Written at
EPFL
EPFL units
RAO
Available on Infoscience
May 31, 2022
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/188197