conference paper

Distribution Inference Risks: Identifying and Mitigating Sources of Leakage

Hartmann, Valentin • Meynent, Leo • Peyrard, Maxime • Dimitriadis, Dimitrios • Tople, Shruti • West, Robert
January 1, 2023
2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)
1st IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)

A large body of work shows that machine learning (ML) models can leak sensitive or confidential information about their training data. Recently, leakage due to distribution inference (or property inference) attacks has been gaining attention. In these attacks, the adversary's goal is to infer distributional information about the training data. So far, research on distribution inference has focused on demonstrating successful attacks, with little attention given to identifying the potential causes of the leakage or to proposing mitigations. To bridge this gap, as our main contribution, we theoretically and empirically analyze the sources of information leakage that allow an adversary to perpetrate distribution inference attacks. We identify three sources of leakage: (1) memorizing specific information about the E[Y | X] (expected label given the feature values) of interest to the adversary, (2) a wrong inductive bias of the model, and (3) the finiteness of the training data. Next, based on our analysis, we propose principled mitigation techniques against distribution inference attacks. Specifically, we demonstrate that causal learning techniques are more resilient than associative learning methods to a particular type of distribution inference risk termed distributional membership inference. Lastly, we present a formalization of distribution inference that allows for reasoning about more general adversaries than was previously possible.
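
The abstract describes distribution (property) inference only in prose. As a rough illustration of the generic attack template it refers to (a shadow-model attack, assumed here for exposition, not this paper's method or experiments), the following Python sketch trains shadow models on synthetic datasets that differ in one distributional property, the fraction of records from a hidden subgroup, and then trains a meta-classifier on the shadow models' parameters to infer that property for unseen target models. The function names, the data-generating process, and the white-box parameter features are all assumptions made for illustration.

# Illustrative sketch only: a generic shadow-model distribution inference
# attack, assumed for exposition; not the protocol used in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_dataset(prop_a, n=500):
    # `prop_a` is the secret distributional property: the fraction of
    # records drawn from a hidden subgroup A. Subgroup membership shifts
    # both the features and the labels, so it leaves a trace in E[Y | X]
    # and hence in the parameters of a trained model.
    a = (rng.random(n) < prop_a).astype(float)
    X = rng.normal(loc=a[:, None], scale=1.0, size=(n, 5))
    y = (X[:, 0] + a + rng.normal(0.0, 1.0, size=n) > 1.0).astype(int)
    return X, y

def train_victim(X, y):
    return LogisticRegression(max_iter=1000).fit(X, y)

def model_features(model):
    # White-box adversary: represent each model by its parameter vector.
    return np.concatenate([model.coef_.ravel(), model.intercept_])

# Shadow phase: train models on datasets with property 0.2 vs. 0.8.
feats, labels = [], []
for prop, label in [(0.2, 0), (0.8, 1)]:
    for _ in range(50):
        feats.append(model_features(train_victim(*sample_dataset(prop))))
        labels.append(label)
meta = LogisticRegression(max_iter=1000).fit(np.array(feats), labels)

# Attack phase: infer the property of fresh target models.
targets = [train_victim(*sample_dataset(p)) for p in (0.2, 0.8)]
print(meta.predict(np.array([model_features(m) for m in targets])))
# Typically prints [0 1]: the subgroup fraction is recovered.

The sketch maps onto leakage source (1) from the abstract: the subgroup fraction changes E[Y | X], and the trained models retain enough of that conditional for a meta-classifier to read the property off their parameters.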

Details
Type: conference paper
DOI: 10.1109/SaTML54575.2023.00018
Web of Science ID: WOS:001012311500008
Author(s): Hartmann, Valentin; Meynent, Leo; Peyrard, Maxime; Dimitriadis, Dimitrios; Tople, Shruti; West, Robert
Date Issued: 2023-01-01
Publisher: IEEE Computer Society
Publisher place: Los Alamitos
Published in: 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)
ISBN of the book: 978-1-6654-6299-0
Start page: 136
End page: 149
Subjects: Computer Science, Artificial Intelligence • Computer Science, Interdisciplinary Applications • Computer Science
Editorial or Peer reviewed: REVIEWED
Written at: EPFL
EPFL units: DLAB
Event name: 1st IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)
Event place: Raleigh, NC
Event date: Feb 08-10, 2023
Available on Infoscience: July 31, 2023
Use this identifier to reference this record: https://infoscience.epfl.ch/handle/20.500.14299/199477
Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.