Infoscience
conference paper

Towards Understanding Sharpness-Aware Minimization

Andriushchenko, Maksym • Flammarion, Nicolas
January 1, 2022
International Conference On Machine Learning, Vol 162
39th International Conference on Machine Learning (ICML)

Sharpness-Aware Minimization (SAM) is a recent training method that relies on worst-case weight perturbations and significantly improves generalization in various settings. We argue that the existing justifications for the success of SAM, which are based on a PAC-Bayes generalization bound and the idea of convergence to flat minima, are incomplete. Moreover, there is no explanation for the success of using m-sharpness in SAM, which has been shown to be essential for generalization. To better understand this aspect of SAM, we theoretically analyze its implicit bias for diagonal linear networks. We prove that SAM always chooses a solution that enjoys better generalization properties than standard gradient descent for a certain class of problems, and this effect is amplified by using m-sharpness. We further study the properties of the implicit bias on non-linear networks empirically, where we show that fine-tuning a standard model with SAM can lead to significant generalization improvements. Finally, we provide convergence results of SAM for non-convex objectives when used with stochastic gradients. We illustrate these results empirically for deep networks and discuss their relation to the generalization behavior of SAM.
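
For context, the SAM update described in the abstract can be summarized in a few lines: an ascent step computes the first-order worst-case weight perturbation of norm rho, and the descent step applies the gradient taken at the perturbed weights. The NumPy sketch below is illustrative only and is not the authors' implementation; the names sam_step, loss_grad, rho, and lr are hypothetical, and m-sharpness (computing the perturbation separately on sub-batches of size m) is omitted for brevity.

    import numpy as np

    def sam_step(w, loss_grad, rho=0.05, lr=0.1):
        # One Sharpness-Aware Minimization step (illustrative sketch).
        # w         : current parameter vector (np.ndarray)
        # loss_grad : callable returning the gradient of the mini-batch loss at a point
        # rho       : radius of the worst-case weight perturbation
        # lr        : learning rate for the outer descent step
        g = loss_grad(w)
        # Ascent step: the first-order solution of max_{||eps|| <= rho} L(w + eps)
        # is eps = rho * g / ||g||.
        eps = rho * g / (np.linalg.norm(g) + 1e-12)
        # Descent step: apply the gradient evaluated at the perturbed weights.
        return w - lr * loss_grad(w + eps)

    # Toy usage on L(w) = 0.5 * ||w||^2, whose gradient is simply w.
    w = np.array([1.0, -2.0])
    for _ in range(100):
        w = sam_step(w, loss_grad=lambda v: v)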

Details
Type
conference paper
Web of Science ID
WOS:000899944900029
Author(s)
Andriushchenko, Maksym
Flammarion, Nicolas
Date Issued
2022-01-01
Publisher
JMLR-JOURNAL MACHINE LEARNING RESEARCH
Publisher place
San Diego
Published in
International Conference On Machine Learning, Vol 162
Series title/Series vol.
Proceedings of Machine Learning Research
Start page
639
End page
668
Subjects
Computer Science, Artificial Intelligence • Computer Science
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
TML
Event name
39th International Conference on Machine Learning (ICML)
Event place
Baltimore, MD
Event date
Jul 17-23, 2022

Available on Infoscience
March 13, 2023
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/195690