Infoscience
conference paper

Arbitrary Decisions are a Hidden Cost of Differentially Private Training

Kulynych, Bogdan • Hsu, Hsiang • Troncoso, Carmela • Calmon, Flavio du Pin
January 1, 2023
Proceedings of the 6th ACM Conference on Fairness, Accountability, and Transparency (FAccT 2023)
6th ACM Conference on Fairness, Accountability, and Transparency (FAccT)

Mechanisms used in privacy-preserving machine learning often aim to guarantee differential privacy (DP) during model training. Practical DP-ensuring training methods use randomization when fitting model parameters to privacy-sensitive data (e.g., adding Gaussian noise to clipped gradients). We demonstrate that such randomization incurs predictive multiplicity: for a given input example, the output predicted by equally-private models depends on the randomness used in training. Thus, for a given input, the predicted output can vary drastically if a model is re-trained, even if the same training dataset is used. The predictive-multiplicity cost of DP training has not been studied, and is currently neither audited for nor communicated to model designers and stakeholders. We derive a bound on the number of re-trainings required to estimate predictive multiplicity reliably. We analyze, both theoretically and through extensive experiments, the predictive-multiplicity cost of three DP-ensuring algorithms: output perturbation, objective perturbation, and DP-SGD. We demonstrate that the degree of predictive multiplicity rises as the level of privacy increases, and is unevenly distributed across individuals and demographic groups in the data. Because randomness used to ensure DP during training explains predictions for some examples, our results highlight a fundamental challenge to the justifiability of decisions supported by differentially-private models in high-stakes settings. We conclude that practitioners should audit the predictive multiplicity of their DP-ensuring algorithms before deploying them in applications of individual-level consequence.
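The sketch below illustrates the abstract's core claim on a toy problem: equally-private models can disagree on a fixed input purely because of the randomness in the DP mechanism. It is a minimal illustration, not the paper's artifact. It uses output perturbation (one of the three algorithms the paper analyzes) on a logistic regression, with a noise scale `sigma` chosen for illustration rather than calibrated to a formal (epsilon, delta) guarantee, and it quotes a generic Hoeffding bound as a stand-in for the paper's own bound on the number of re-trainings.

```python
# Minimal sketch (not the paper's code): predictive multiplicity induced by
# the randomness of DP output perturbation on a logistic regression.
# `sigma` is an illustrative noise scale, NOT calibrated to any formal
# (epsilon, delta) guarantee.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
x_query = X[0]           # one fixed input example
sigma = 0.5              # illustrative noise scale (assumption)

base = LogisticRegression().fit(X, y)
w0, b0 = base.coef_.ravel(), base.intercept_[0]

def private_predict(seed: int) -> int:
    """Output perturbation: add Gaussian noise to the fitted parameters,
    then predict the label of the fixed query point."""
    rng = np.random.default_rng(seed)
    w = w0 + rng.normal(0.0, sigma, size=w0.shape)
    b = b0 + rng.normal(0.0, sigma)
    return int(x_query @ w + b > 0)

# Re-run the mechanism under many seeds and measure how often the
# prediction for the same input flips away from the majority answer.
preds = np.array([private_predict(s) for s in range(1000)])
majority = int(preds.mean() > 0.5)
print(f"disagreement with majority prediction: {np.mean(preds != majority):.3f}")

# A generic Hoeffding bound (not necessarily the paper's bound): estimating
# this disagreement rate to within +/- alpha with confidence 1 - delta needs
# on the order of ln(2/delta) / (2 * alpha**2) re-trainings.
alpha, delta = 0.05, 0.01
n_retrain = int(np.ceil(np.log(2 / delta) / (2 * alpha**2)))
print(f"re-trainings for +/-{alpha} at {1 - delta:.0%} confidence: {n_retrain}")
```

Raising `sigma` (stronger privacy in this mechanism) increases the disagreement rate, which mirrors the paper's finding that predictive multiplicity grows with the level of privacy.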

Details
Type
conference paper
DOI
10.1145/3593013.3594103
Web of Science ID
WOS:001062819300131
Author(s)
Kulynych, Bogdan  
Hsu, Hsiang
Troncoso, Carmela  
Calmon, Flavio du Pin
Corporate authors
Association for Computing Machinery
Date Issued
2023-01-01
Publisher
Association for Computing Machinery
Publisher place
New York
Published in
Proceedings of the 6th ACM Conference on Fairness, Accountability, and Transparency (FAccT 2023)
ISBN of the book
978-1-4503-7252-7
Start page
1609
End page
1623
Subjects
Technology
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
SPRING  
Event name
6th ACM Conference on Fairness, Accountability, and Transparency (FAccT)
Event place
Chicago, IL
Event date
June 12-15, 2023

Funder
Swiss National Science Foundation (SNF)
Grant Number
200021_188824
Funder
US National Science Foundation
Grant Number
CAREER 1845852

Available on Infoscience
February 14, 2024
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/203673