The Robustness of Deep Networks: A Geometric Perspective

Deep neural networks have recently shown impressive classification performance on a diverse set of visual tasks. When deployed in real-world (noise-prone) environments, it is equally important that these classifiers satisfy robustness guarantees: small perturbations applied to the samples should not cause significant losses in the performance of the predictor. The goal of this paper is to discuss the robustness of deep networks to a diverse set of perturbations that may affect the samples in practice, including adversarial perturbations, random noise, and geometric transformations. Our paper further discusses recent works that build on this robustness analysis to provide geometric insights into the classifier's decision surface, which help develop a better understanding of deep nets. The overview finally presents recent solutions that attempt to increase the robustness of deep networks. We hope that this review paper will contribute to shedding light on the open research challenges in the robustness of deep networks, and will spur interest in the analysis of their fundamental properties.
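The adversarial perturbations mentioned in the abstract can be illustrated with a minimal sketch (not taken from the paper): a fast-gradient-sign-style attack on a toy linear classifier. The weights, sample, and step size below are all hypothetical, chosen purely for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear classifier: predict class 1 when w . x > 0.
w = np.array([2.0, -1.0])
x = np.array([0.5, 0.5])   # clean sample, assumed true label y = 1
y = 1.0

# Gradient of the logistic loss with respect to the input x.
grad_x = (sigmoid(w @ x) - y) * w

# FGSM-style perturbation: a small step along the sign of the gradient.
eps = 0.2
x_adv = x + eps * np.sign(grad_x)

print(w @ x > 0)      # clean prediction: class 1 (True)
print(w @ x_adv > 0)  # adversarial prediction: flipped to class 0 (False)
```

Even though `x_adv` differs from `x` by at most 0.2 per coordinate, the predicted class changes, which is exactly the failure mode the robustness analysis studies.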


Published in:
IEEE Signal Processing Magazine, vol. 34, no. 6, pp. 50-62
Year:
2017
Publisher:
Piscataway, Institute of Electrical and Electronics Engineers
ISSN:
1053-5888




Record created 2017-07-11, modified 2018-09-13
