conference paper not in proceedings

Efficient local linearity regularization to overcome catastrophic overfitting

Abad Rocamora, Elias • Liu, Fanghui • Chrysos, Grigorios • Olmos, Pablo M. • Cevher, Volkan
2024
12th International Conference on Learning Representations (ICLR 2024)

Catastrophic overfitting (CO) in single-step adversarial training (AT) results in abrupt drops in adversarial test accuracy (even down to 0%). For models trained with multi-step AT, the loss function has been observed to behave locally linearly with respect to the input; this property is, however, lost in single-step AT. To address CO in single-step AT, several methods have been proposed that enforce local linearity of the loss via regularization. However, these regularization terms considerably slow down training because they require double backpropagation. Instead, in this work we introduce a regularization term, called ELLE, that mitigates CO effectively and efficiently in classical AT evaluations as well as in more difficult regimes, e.g., large adversarial perturbations and long training schedules. Our regularization term can be theoretically linked to the curvature of the loss function and is computationally cheaper than previous methods because it avoids double backpropagation. Our thorough experimental validation demonstrates that our method does not suffer from CO, even in challenging settings where previous works do. We also observe that adapting our regularization parameter during training (ELLE-A) greatly improves performance, especially in large-ϵ setups. Our implementation is available at https://github.com/LIONS-EPFL/ELLE.
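Illustrative sketch (not taken from the paper or its repository): the efficiency claim in the abstract rests on measuring the loss's deviation from local linearity using forward passes only, so that the penalty's gradient needs no double backpropagation. Below is a minimal, hypothetical PyTorch rendition of such a local-linearity penalty; the function name, the choice of cross-entropy, and the assumption of 4D image inputs are all illustrative assumptions. It samples two points in the ϵ-ball around the input and a random convex combination of them, then penalizes the squared gap between the loss at the combined point and the linear interpolation of the two losses.

import torch
import torch.nn.functional as F

def local_linearity_penalty(model, x, y, eps):
    # Hypothetical sketch of a local-linearity regularizer of the kind the
    # abstract describes; only forward passes through the model are needed,
    # so differentiating this penalty avoids double backpropagation.
    delta_a = torch.empty_like(x).uniform_(-eps, eps)  # two random points
    delta_b = torch.empty_like(x).uniform_(-eps, eps)  # in the eps-ball
    x_a, x_b = x + delta_a, x + delta_b
    # Per-sample mixing coefficient; assumes 4D image inputs (B, C, H, W).
    alpha = torch.rand(x.size(0), 1, 1, 1, device=x.device)
    x_c = alpha * x_a + (1.0 - alpha) * x_b  # convex combination

    loss_a = F.cross_entropy(model(x_a), y, reduction="none")
    loss_b = F.cross_entropy(model(x_b), y, reduction="none")
    loss_c = F.cross_entropy(model(x_c), y, reduction="none")

    alpha = alpha.flatten()
    # Squared deviation of the loss from linear behavior on the segment:
    return ((loss_c - alpha * loss_a - (1.0 - alpha) * loss_b) ** 2).mean()

In a single-step AT loop, a penalty like this would typically be added to the adversarial loss with a weight, e.g. loss = adv_loss + lam * local_linearity_penalty(model, x, y, eps); per the abstract, the ELLE-A variant adapts that regularization parameter during training.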

Type
conference paper not in proceedings
Author(s)
Abad Rocamora, Elias  
Liu, Fanghui  
Chrysos, Grigorios  
Olmos, Pablo M.
Cevher, Volkan
Date Issued
2024
Number of pages
30
Subjects
ML-AI
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
LIONS
Event name
12th International Conference on Learning Representations (ICLR 2024)
Event place
Vienna, Austria
Event date
May 7-11, 2024

Available on Infoscience
July 1, 2024
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/208915