conference paper

Understanding and Improving Fast Adversarial Training

Andriushchenko, Maksym • Flammarion, Nicolas
July 6, 2020
Advances in Neural Information Processing Systems 33 (NeurIPS 2020)

A recent line of work has focused on making adversarial training computationally efficient for deep learning models. In particular, Wong et al. (2020) showed that ℓ∞-adversarial training with the fast gradient sign method (FGSM) can fail due to a phenomenon called "catastrophic overfitting", in which the model quickly loses its robustness over a single epoch of training. We show that adding a random step to FGSM, as proposed in Wong et al. (2020), does not prevent catastrophic overfitting, and that randomness is not important per se -- its main role is simply to reduce the magnitude of the perturbation. Moreover, we show that catastrophic overfitting is not inherent to deep and overparametrized networks, but can occur in a single-layer convolutional network with a few filters. In an extreme case, even a single filter can make the network highly non-linear locally, which is the main reason why FGSM training fails. Based on this observation, we propose a new regularization method, GradAlign, that prevents catastrophic overfitting by explicitly maximizing the gradient alignment inside the perturbation set and improves the quality of the FGSM solution. As a result, GradAlign makes it possible to apply FGSM training successfully even for larger ℓ∞-perturbations and reduces the gap to multi-step adversarial training. The code of our experiments is available at this https URL.
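To make the abstract concrete, below is a minimal PyTorch-style sketch of one FGSM adversarial-training step combined with a gradient-alignment penalty of the kind the abstract describes (one minus the cosine similarity between the input gradients at the clean input and at a uniformly perturbed input inside the ℓ∞ ball). The function names, the hyperparameters eps and lam, and the [0, 1] input range are illustrative assumptions, not the authors' released implementation; refer to the linked code for the actual method.

```python
import torch
import torch.nn.functional as F

def grad_align_reg(model, x, y, eps):
    """Gradient-alignment penalty (sketch): 1 - cos similarity between the
    input gradients at the clean input and at a uniformly perturbed input."""
    x1 = x.clone().detach().requires_grad_(True)
    x2 = (x + torch.empty_like(x).uniform_(-eps, eps)).detach().requires_grad_(True)
    # create_graph=True so the penalty is differentiable w.r.t. the model parameters.
    g1 = torch.autograd.grad(F.cross_entropy(model(x1), y), x1, create_graph=True)[0]
    g2 = torch.autograd.grad(F.cross_entropy(model(x2), y), x2, create_graph=True)[0]
    cos = F.cosine_similarity(g1.flatten(1), g2.flatten(1), dim=1)
    return (1.0 - cos).mean()

def fgsm_gradalign_step(model, x, y, optimizer, eps=8 / 255, lam=0.2):
    """One FGSM adversarial-training step plus the alignment penalty.
    eps, lam and the [0, 1] input range are illustrative assumptions."""
    # FGSM: a single signed-gradient step of size eps from the clean input.
    x_req = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)[0]
    x_adv = (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

    loss = F.cross_entropy(model(x_adv), y) + lam * grad_align_reg(model, x, y, eps)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here lam controls the strength of the alignment penalty; larger values keep the input gradients more consistent across the perturbation set, which is the mechanism the paper credits with preventing catastrophic overfitting.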

Files
Name: Understanding and Improving Fast Adversarial Training.pdf
Access type: openaccess
Size: 1.77 MB
Format: Adobe PDF
Checksum (MD5): 79dd696376bc9a8c989e2ae3a464c599
