Abstract

In real-world classification problems, nuisance variables can cause substantial variability in the data. Nuisances correspond, for example, to geometric distortions of the image, occlusions, illumination changes, or any other deformations that do not alter the ground-truth label of the image. It is therefore crucial that classifiers be robust to nuisance variables, especially when they are deployed in real and possibly hostile environments. In this paper, we propose a probabilistic framework for efficiently estimating the robustness of state-of-the-art classifiers and for drawing problematic samples from the nuisance space. This allows us to visualize and understand the regions of the nuisance space that cause misclassification, with the goal of improving robustness. Our probabilistic framework is applicable to arbitrary classifiers and to potentially high-dimensional and complex nuisance spaces. We illustrate the proposed approach on several classification problems and compare classifiers in terms of their robustness to nuisances. Moreover, using our sampling technique, we visualize problematic regions of the nuisance space and gain insights into the weaknesses of classifiers as well as the features used in classification (e.g., in face recognition). We believe the proposed analysis tools represent an important step towards understanding large modern classification architectures and building architectures with better robustness to nuisances.
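The abstract does not spell out the estimator, but the core idea, estimating a classifier's failure probability under random nuisances by sampling a parameterized nuisance space, can be sketched as follows. This is a minimal Monte Carlo illustration, not the paper's actual framework: the nuisance space (planar rotations and translations), the sampling ranges, and the helper names (`sample_nuisance`, `apply_nuisance`, the user-supplied `classify` callable) are all hypothetical assumptions made for the sketch.

```python
# Hypothetical sketch: Monte Carlo estimation of a classifier's robustness
# to a parameterized nuisance space (rotations + translations). Nuisance
# ranges and helper names are illustrative assumptions, not the paper's method.
import numpy as np
from scipy.ndimage import rotate, shift


def sample_nuisance(rng):
    """Draw a random nuisance parameter: an angle (degrees) and a 2-D shift."""
    angle = rng.uniform(-30.0, 30.0)          # assumed rotation range
    dy, dx = rng.uniform(-4.0, 4.0, size=2)   # assumed translation range (pixels)
    return angle, (dy, dx)


def apply_nuisance(image, angle, translation):
    """Apply the geometric nuisance to a 2-D image, preserving its shape."""
    distorted = rotate(image, angle, reshape=False, order=1)
    return shift(distorted, translation, order=1)


def estimate_robustness(classify, image, label, n_samples=1000, seed=0):
    """Estimate P[classify is wrong under a random nuisance] and collect
    the nuisance parameters that cause misclassification."""
    rng = np.random.default_rng(seed)
    failures = []
    for _ in range(n_samples):
        angle, translation = sample_nuisance(rng)
        distorted = apply_nuisance(image, angle, translation)
        if classify(distorted) != label:
            failures.append((angle, translation))
    return len(failures) / n_samples, failures
```

The returned failure rate serves as a simple robustness estimate, and the collected `failures` can be plotted over the nuisance parameters to visualize problematic regions of the nuisance space, in the spirit of the analysis described above.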
