Infoscience

working paper

RobustBench: a standardized adversarial robustness benchmark

Croce, Francesco • Andriushchenko, Maksym • Sehwag, Vikash • Flammarion, Nicolas • Chiang, Mung • Mittal, Prateek • Hein, Matthias
October 19, 2020

Evaluation of adversarial robustness is often error-prone, leading to overestimation of the true robustness of models. While adaptive attacks designed for a particular defense are a way out of this, there are only approximate guidelines on how to perform them. Moreover, adaptive evaluations are highly customized for particular models, which makes it difficult to compare different defenses. Our goal is to establish a standardized benchmark of adversarial robustness that reflects the robustness of the considered models as accurately as possible within a reasonable computational budget. This requires imposing some restrictions on the admitted models to rule out defenses that only make gradient-based attacks ineffective without improving actual robustness. We evaluate the robustness of models for our benchmark with AutoAttack, an ensemble of white- and black-box attacks that was recently shown in a large-scale study to improve almost all robustness evaluations compared to the original publications. Our leaderboard, hosted at https://robustbench.github.io/, aims to reflect the current state of the art on a set of well-defined tasks in ℓ∞- and ℓ2-threat models, with possible extensions in the future. Additionally, we open-source the library https://github.com/RobustBench/robustbench, which provides unified access to state-of-the-art robust models to facilitate their downstream applications. Finally, based on the collected models, we analyze general trends in ℓp-robustness and its impact on other tasks such as robustness to various distribution shifts and out-of-distribution detection.
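For reference, a minimal sketch of the workflow the abstract describes, following the usage documented in the RobustBench repository README: loading a leaderboard model through the library and evaluating it with AutoAttack. The model identifier 'Carmon2019Unlabeled' is one example entry from the CIFAR-10 ℓ∞ leaderboard, and exact signatures may vary across library versions.

```python
# Sketch based on the usage shown in the RobustBench README
# (https://github.com/RobustBench/robustbench); signatures may
# differ across versions of the library.
from robustbench.data import load_cifar10
from robustbench.utils import load_model
from autoattack import AutoAttack  # pip package 'autoattack'

# Load a small batch of CIFAR-10 test points.
x_test, y_test = load_cifar10(n_examples=50)

# Download a model by its leaderboard identifier;
# 'Carmon2019Unlabeled' is one example entry.
model = load_model(model_name='Carmon2019Unlabeled',
                   dataset='cifar10',
                   threat_model='Linf')

# Evaluate with AutoAttack, the white-/black-box attack ensemble
# used by the benchmark, at the standard eps = 8/255 budget.
adversary = AutoAttack(model, norm='Linf', eps=8 / 255)
x_adv = adversary.run_standard_evaluation(x_test, y_test)
```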

Details
Type
working paper
Author(s)
Croce, Francesco
Andriushchenko, Maksym  
Sehwag, Vikash
Flammarion, Nicolas  
Chiang, Mung
Mittal, Prateek
Hein, Matthias
Date Issued

2020-10-19

Subjects

Adversarial robustness • Machine learning • Deep learning

URL

arXiv: https://arxiv.org/abs/2010.09670
Editorial or Peer reviewed

NON-REVIEWED

Written at

EPFL

EPFL units
TML  
Available on Infoscience
October 24, 2020
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/172723