Infoscience

Thesis

Robust image classification: analysis and applications

In the past decade, image classification systems have witnessed major advances that led to record performances on challenging datasets. However, little is known about the behavior of these classifiers when the data is subject to perturbations, such as random noise, structured geometric transformations, and other common nuisances (e.g., occlusions and illumination changes). Such perturbation models are likely to affect the data in a wide range of applications, and it is therefore crucial to have a good understanding of the classifiers' robustness properties. We provide in this thesis new theoretical and empirical studies on the robustness of classifiers to perturbations in the data. First, we address the problem of robustness of classifiers to adversarial perturbations. In this corruption model, data points undergo a minimal perturbation that is specifically designed to change the estimated label of the classifier. We provide an efficient and accurate algorithm to estimate the robustness of classifiers to adversarial perturbations, and confirm the high vulnerability of state-of-the-art classifiers to such perturbations. We then theoretically analyze the robustness of classifiers to adversarial perturbations, and show the existence of learning-independent limits on the robustness that reveal a tradeoff between robustness and classification accuracy. This theoretical analysis sheds light on the causes of the adversarial instability of state-of-the-art classifiers, which is crucial for the development of new methods that improve the robustness to such perturbations. Next, we study the robustness of classifiers in a novel semi-random noise regime that generalizes both the random and adversarial perturbation regimes. We establish precise theoretical bounds on the robustness of classifiers in this general regime, which depend on the curvature of the classifier's decision boundary.
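To make the notion of a minimal adversarial perturbation concrete, consider the simplest case of an affine binary classifier f(x) = wᵀx + b: the smallest perturbation (in the Euclidean sense) that moves x onto the decision boundary is the projection r* = −f(x)·w/‖w‖². The sketch below is an illustrative toy example under this linear assumption, not the estimation algorithm developed in the thesis:

```python
import numpy as np

def minimal_adversarial_perturbation(w, b, x):
    """Closed-form minimal L2 perturbation that moves x onto the
    decision boundary of the affine classifier f(x) = w.x + b."""
    f = np.dot(w, x) + b
    return -f * w / np.dot(w, w)

# Toy 2-D linear classifier (all values chosen for illustration)
w = np.array([3.0, 4.0])
b = -1.0
x = np.array([2.0, 1.0])            # f(x) = 6 + 4 - 1 = 9 > 0
r = minimal_adversarial_perturbation(w, b, x)
print(np.dot(w, x + r) + b)         # ~0: x + r lies on the boundary
print(np.linalg.norm(r))            # |f(x)| / ||w|| = 9 / 5 = 1.8
```

The norm ‖r*‖ = |f(x)|/‖w‖ is precisely the robustness of the linear classifier at x; for nonlinear classifiers such as deep networks, no closed form exists, which is what motivates numerical estimation procedures.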
Our bounds show in particular a blessing of dimensionality phenomenon: in high-dimensional classification tasks, robustness to random noise can be achieved even if the classifier is extremely unstable to adversarial perturbations. However, we show that for semi-random noise that is mostly random and only mildly adversarial, state-of-the-art classifiers remain vulnerable. We further perform experiments showing that the derived bounds provide very accurate robustness estimates when applied to various state-of-the-art deep neural networks and different datasets. Finally, we study the invariance of classifiers to geometric deformations and structured nuisances, such as occlusions. We propose principled and systematic methods for quantifying the robustness of arbitrary image classifiers to such deformations, and provide new numerical methods for estimating these quantities. We conduct an in-depth experimental evaluation and show that the proposed methods allow us to quantify the gain in invariance that results from increasing the depth of a convolutional neural network, or from adding transformed samples to the training set. Moreover, we demonstrate that the proposed methods identify "weak spots" of classifiers by sampling from the set of nuisances that cause misclassification. Our results thus provide insights into the important features used by the classifier to distinguish between classes. Overall, we provide in this thesis novel quantitative results that precisely describe the behavior of classifiers under perturbations of the data. We believe our results will help objectively assess the reliability of classifiers in real-world noisy environments and, eventually, guide the construction of more reliable systems.
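The kind of quantity such invariance measures estimate can be illustrated with a one-parameter transformation: the smallest transformation magnitude that changes the predicted label. The toy sketch below uses horizontal pixel translation and a hypothetical threshold classifier (both are assumptions for illustration; the thesis handles general geometric deformations with dedicated numerical methods):

```python
import numpy as np

def minimal_misclassifying_shift(classify, image, max_shift):
    """Smallest horizontal pixel shift (in absolute value) that changes
    the predicted label; returns None if the classifier is invariant
    over the tested range. Toy one-parameter invariance measure."""
    label = classify(image)
    for s in range(1, max_shift + 1):
        for shift in (s, -s):                 # try both directions
            if classify(np.roll(image, shift, axis=1)) != label:
                return shift
    return None

# Hypothetical classifier: predicts 1 iff the left half is brighter
def classify(img):
    h, w = img.shape
    return int(img[:, : w // 2].sum() > img[:, w // 2 :].sum())

img = np.zeros((4, 8))
img[:, 2:4] = 1.0                             # bright patch on the left
print(minimal_misclassifying_shift(classify, img, max_shift=4))
```

Sampling transformations at the boundary of the invariant set, rather than only measuring its size, is what exposes the "weak spots" mentioned above: the returned transformation is itself an example of a nuisance that causes misclassification.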
