Abstract

For many classification tasks, the ideal classifier should be invariant to geometric transformations such as a change of view angle. However, whether state-of-the-art image classifiers, such as convolutional neural networks, possess this invariance remains unclear, mainly because methods for measuring their transformation invariance are lacking, especially for higher-dimensional transformations. In this project, we propose two algorithms for such measurement. The first, Manifool, exploits the structure of the image appearance manifold to find small fooling transformations and uses them to compute the invariance of the classifier. The second, an iterative projection algorithm, adapts adversarial perturbation methods for neural networks to find fooling examples within a given transformation set. We compare these methods with similar algorithms in terms of speed and validity, and use them to show that transformation invariance increases with network depth, even in reasonably deep networks. Overall, we believe that these two algorithms can be used to analyze different architectures and can help build more robust classifiers.
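To make the notion of invariance measurement concrete, the following is a minimal Python sketch of how such a score could be computed. It assumes a hypothetical routine find_fooling_transform (standing in for Manifool or the iterative projection algorithm) that returns a small transformation changing the classifier's decision on one image, and a hypothetical geodesic_distance measuring how far that transformation is from the identity; both are placeholders for illustration, not the project's actual interface.

    import numpy as np

    def invariance_score(classifier, images, find_fooling_transform, geodesic_distance):
        # For each image, find a small transformation that fools the classifier
        # and record a normalized measure of its size; the classifier's
        # invariance score is the average of these minimal fooling sizes.
        sizes = []
        for x in images:
            tau = find_fooling_transform(classifier, x)  # smallest fooling transformation found
            sizes.append(geodesic_distance(tau, x))      # its distance from the identity transformation
        return float(np.mean(sizes))                     # larger score = harder to fool = more invariant

Under this reading, a classifier is more invariant when, on average, a larger transformation is needed to change its decision.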
