Manitest: Are classifiers really invariant?

Invariance to geometric transformations is a highly desirable property of automatic classifiers in many image recognition tasks. Nevertheless, it is unclear to what extent state-of-the-art classifiers are invariant to basic transformations such as rotations and translations. This is mainly due to the lack of general methods that properly measure such invariance. In this paper, we propose a rigorous and systematic approach for quantifying the invariance of any classifier to geometric transformations. Our key idea is to cast the problem of assessing a classifier's invariance as the computation of geodesics along the manifold of transformed images. We propose the Manitest method, built on the efficient Fast Marching algorithm, to compute the invariance of classifiers. Our method quantifies in particular the importance of data augmentation for learning invariance from data, and the increased invariance of convolutional neural networks with depth. We foresee that the proposed generic tool, which measures invariance to a large class of geometric transformations for arbitrary classifiers, will find many applications in evaluating and comparing classifiers based on their invariance, and will help improve the invariance of existing classifiers.
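The core idea above can be illustrated with a toy sketch: discretize the transformation group on a grid, weight edges by the image-space distance between neighbouring transformed images (approximating the manifold metric), and find the geodesic distance from the identity to the nearest transformation that flips the classifier's decision. This is not the authors' implementation: Dijkstra's algorithm stands in for Fast Marching, the group is reduced to cyclic translations, and the classifier and all names (`transform`, `geodesic_invariance`) are hypothetical.

```python
import heapq
import numpy as np

def transform(image, tx, ty):
    # Toy transformation group: cyclic translations only
    # (Manitest handles richer groups, e.g. rotation-translation).
    return np.roll(np.roll(image, tx, axis=0), ty, axis=1)

def geodesic_invariance(image, classifier, max_shift=5):
    """Geodesic distance on a discretized transformation grid from the
    identity to the nearest transformation that changes the classifier's
    label. Dijkstra is a simple stand-in for Fast Marching; edge weights
    approximate the manifold metric by the L2 distance between
    neighbouring transformed images."""
    base_label = classifier(image)
    dist = {(0, 0): 0.0}
    heap = [(0.0, (0, 0))]
    done = set()
    while heap:
        d, (tx, ty) = heapq.heappop(heap)
        if (tx, ty) in done:
            continue
        done.add((tx, ty))
        img = transform(image, tx, ty)
        if classifier(img) != base_label:
            return d  # smaller score = less invariant classifier
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = tx + dx, ty + dy
            if abs(nx) > max_shift or abs(ny) > max_shift:
                continue
            nd = d + np.linalg.norm(img - transform(image, nx, ny))
            if nd < dist.get((nx, ny), np.inf):
                dist[(nx, ny)] = nd
                heapq.heappush(heap, (nd, (nx, ny)))
    return np.inf  # label never changes on the explored grid

# Toy example: a single bright pixel, and a classifier comparing the
# energy of the top and bottom halves of the image.
image = np.zeros((8, 8))
image[1, 1] = 1.0
classifier = lambda im: int(im[:4].sum() > im[4:].sum())
score = geodesic_invariance(image, classifier)
```

A large score means the classifier's decision survives large transformations of the input, i.e. it is more invariant; averaging such scores over a test set gives a single invariance measure per classifier.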

Presented at:
British Machine Vision Conference (BMVC), Swansea, UK, September 7-10, 2015


 Record created 2015-07-23, last modified 2019-12-05

MANITESTv11 - Download fulltext (ZIP)
MANITEST_CIFAR_data - Download fulltext (ZIP)
Manitest_MNIST_data - Download fulltext (ZIP)
MANITEST_code - Download fulltext (ZIP)
MANITEST_v10beta - Download fulltext (ZIP)
bmvc_abstract - Download fulltext (PDF)
bmvc_paper - Download fulltext (PDF)