research article

Training Provably Robust Models by Polyhedral Envelope Regularization

Liu, Chen • Salzmann, Mathieu • Süsstrunk, Sabine
2023
IEEE Transactions on Neural Networks and Learning Systems

Training certifiable neural networks enables us to obtain models with robustness guarantees against adversarial attacks. In this work, we introduce a framework that characterizes a provable adversarial-free region in the neighborhood of the input data by a polyhedral envelope, which yields more fine-grained certified robustness than existing methods. We further introduce polyhedral envelope regularization (PER) to encourage larger adversarial-free regions and thus improve the provable robustness of the models. We demonstrate the flexibility and effectiveness of our framework on standard benchmarks; it applies to networks of different architectures and with general activation functions. Compared with the state of the art, PER has negligible computational overhead and achieves better robustness guarantees and accuracy on clean data in various settings.
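The abstract's central idea can be sketched in the simplest setting. This is a hypothetical illustration, not the authors' implementation: for a linear classifier, every pairwise decision boundary is a hyperplane, so the adversarial-free region around an input is exactly a polyhedron, and the certified l2 radius is the distance to the nearest face. The paper's polyhedral envelope generalizes this by applying linear bounds to a deep network's output; the function names (`certified_radii`, `per_penalty`) and the `target` margin below are illustrative assumptions.

```python
import numpy as np

def certified_radii(W, b, x, label):
    """For a linear classifier z = W @ x + b, return the signed l2 distance
    from x to each decision boundary z_label = z_j (hypothetical sketch)."""
    z = W @ x + b
    radii = {}
    for j in range(len(b)):
        if j == label:
            continue
        w_diff = W[label] - W[j]
        # distance from x to the hyperplane (W_label - W_j) x + (b_label - b_j) = 0
        radii[j] = (z[label] - z[j]) / np.linalg.norm(w_diff)
    return radii

def per_penalty(radii, target):
    """Hinge-style regularizer in the spirit of PER: penalize boundaries
    closer than `target`, encouraging a larger adversarial-free region."""
    return sum(max(0.0, target - r) for r in radii.values())

# Two classes in 2-D: the decision boundary is the line x1 = 0.
W = np.array([[1.0, 0.0], [-1.0, 0.0]])
b = np.zeros(2)
x = np.array([1.0, 0.0])
radii = certified_radii(W, b, x, label=0)   # x sits at distance 1 from the boundary
penalty = per_penalty(radii, target=2.0)    # positive, since 1 < 2
```

During training, a penalty of this shape would be added to the classification loss so that gradient descent pushes decision boundaries away from the data, which is the mechanism PER exploits to enlarge certified regions.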

Type
research article
DOI
10.1109/TNNLS.2021.3111892
Web of Science ID
WOS:000732134700001
Author(s)
Liu, Chen • Salzmann, Mathieu • Süsstrunk, Sabine
Date Issued
2023
Publisher
IEEE (Institute of Electrical and Electronics Engineers)
Published in
IEEE Transactions on Neural Networks and Learning Systems
Volume
34
Issue
6
Start page
3146
End page
3160

Subjects
Computer Science, Artificial Intelligence • Computer Science, Hardware & Architecture • Computer Science, Theory & Methods • Engineering, Electrical & Electronic • Computer Science • Engineering • robustness • training • predictive models • computational modeling • standards • smoothing methods • recurrent neural networks • adversarial training • provable robustness
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
CVLAB • IVRL
Available on Infoscience
January 1, 2022
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/184111

Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.