Infoscience
conference paper

The Utility and Complexity of In- and Out-of-Distribution Machine Unlearning

Allouah, Youssef • Kazdan, Joshua • Guerraoui, Rachid • Koyejo, Sanmi
2025
Proceedings of the Thirteenth International Conference on Learning Representations (ICLR) 2025 [Forthcoming publication]
13th International Conference on Learning Representations (ICLR 2025)

Machine unlearning, the process of selectively removing data from trained models, is increasingly crucial for addressing privacy concerns and knowledge gaps post-deployment. Despite this importance, existing approaches are often heuristic and lack formal guarantees. In this paper, we analyze the fundamental utility, time, and space complexity trade-offs of approximate unlearning, providing rigorous certification analogous to differential privacy. For in-distribution forget data -- data similar to the retain set -- we show that a surprisingly simple and general procedure, empirical risk minimization with output perturbation, achieves tight unlearning-utility-complexity trade-offs, addressing a previous theoretical gap on the separation from unlearning "for free" via differential privacy, which inherently facilitates the removal of such data. However, such techniques fail with out-of-distribution forget data -- data significantly different from the retain set -- where unlearning time complexity can exceed that of retraining, even for a single sample. To address this, we propose a new robust and noisy gradient descent variant that provably amortizes unlearning time complexity without compromising utility.
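The abstract's first positive result concerns empirical risk minimization with output perturbation: solve the ERM problem on the retained data and release the parameters with additive Gaussian noise, in the spirit of differential-privacy-style certification. The toy sketch below illustrates only that general recipe, not the paper's certified procedure; the ridge objective, the noise scale `sigma`, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def erm_ridge(X, y, lam=1e-2):
    # Closed-form regularized ERM (ridge regression):
    # argmin_theta ||X theta - y||^2 + lam ||theta||^2
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def unlearn_output_perturbation(X_retain, y_retain, sigma, rng):
    # Output perturbation: solve ERM on the retain set only,
    # then add Gaussian noise to the released parameters. The
    # noise scale sigma would, in a certified scheme, be
    # calibrated to a sensitivity bound; here it is a placeholder.
    theta = erm_ridge(X_retain, y_retain)
    return theta + rng.normal(0.0, sigma, size=theta.shape)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
theta_true = np.arange(1.0, 6.0)
y = X @ theta_true + 0.1 * rng.normal(size=200)

# Suppose the last 20 samples must be forgotten; keep the rest.
X_r, y_r = X[:-20], y[:-20]
theta_released = unlearn_output_perturbation(X_r, y_r, sigma=0.05, rng=rng)
print(theta_released.round(2))
```

For in-distribution forget data, the paper's point is that such a simple release mechanism already achieves tight unlearning-utility-complexity trade-offs; for out-of-distribution forget data it fails, motivating the robust noisy gradient descent variant the authors propose.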

Type
conference paper
arXiv ID

2412.09119

Author(s)

Allouah, Youssef (Stanford University)

Kazdan, Joshua (Stanford University)

Guerraoui, Rachid (EPFL)

Koyejo, Sanmi (Stanford University)

Date Issued

2025

Publisher

ICLR

Published in
Proceedings of the Thirteenth International Conference on Learning Representations (ICLR) 2025 [Forthcoming publication]
Subjects

machine learning • machine unlearning • privacy • data ownership
Editorial or Peer reviewed

REVIEWED

Written at

EPFL

EPFL units
DCL  
Event name
13th International Conference on Learning Representations (ICLR 2025)

Event acronym
ICLR

Event place
Singapore

Event date
2025-04-24 - 2025-04-28

Available on Infoscience
March 13, 2025
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/247764