The Robustness of Deep Networks: A Geometric Perspective
Deep neural networks have recently shown impressive classification performance on a diverse set of visual tasks. When deployed in real-world (noise-prone) environments, it is equally important that these classifiers satisfy robustness guarantees: small perturbations applied to the samples should not cause significant losses in the performance of the predictor. The goal of this paper is to discuss the robustness of deep networks to a diverse set of perturbations that may affect the samples in practice, including adversarial perturbations, random noise, and geometric transformations. The paper further discusses recent works that build on this robustness analysis to provide geometric insights into the classifier’s decision surface, which help develop a better understanding of deep networks. The overview finally presents recent solutions that attempt to increase the robustness of deep networks. We hope that this review will contribute to shedding light on the open research challenges in the robustness of deep networks, and will stir interest in the analysis of their fundamental properties.
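To illustrate the kind of fragility the abstract refers to, the following is a minimal sketch (an assumption for illustration, not a method from the paper) of a worst-case adversarial perturbation on a linear binary classifier. For a linear model f(x) = sign(w·x + b), the gradient of the score with respect to the input is w itself, so the most damaging perturbation within a small l∞ budget ε is ε·sign(w):

```python
import numpy as np

# Hypothetical linear classifier f(x) = sign(w @ x + b); the weights and
# the sample below are synthetic, chosen only to demonstrate the effect.
rng = np.random.default_rng(0)
w = rng.normal(size=50)            # classifier weights
b = 0.0
x = w / np.linalg.norm(w) * 0.1    # a sample correctly classified positive

def predict(x):
    return np.sign(w @ x + b)

# Worst-case l_inf perturbation of budget eps for a linear model:
# move each coordinate by eps against the sign of the weight.
eps = 0.05
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the tiny perturbation flips the decision
```

Each coordinate of the input moves by only 0.05, yet the induced change in the score, ε·Σ|wᵢ|, accumulates across all 50 dimensions and overwhelms the original margin. The same accumulation effect in high dimensions is what makes imperceptible adversarial perturbations possible for deep networks.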