Improving Face Authentication Using Virtual Samples
In this paper, we present a simple yet effective way to improve a face verification system by generating multiple virtual samples from the single image corresponding to an access request. These images are generated using simple geometric transformations. Such transformations are often applied during training to improve the accuracy of a neural network model by making it robust to minor changes in translation, scale, and orientation. The main contribution of this paper is to introduce such a method during testing: by generating $N$ images from a single image and propagating them through a trained network model, one obtains $N$ scores. By merging these scores with a simple mean operator, we show that the variance of the merged score is reduced by a factor between 1 and $N$. An experiment carried out on the XM2VTS database achieves new state-of-the-art performance.
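The test-time procedure described above can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the helper names (`virtual_samples`, `fused_score`) and the toy template-correlation scorer are hypothetical, and small translations stand in for the paper's geometric transformations.

```python
import numpy as np

def virtual_samples(image, n=8, max_shift=2, rng=None):
    """Generate n virtual samples by small random translations
    (a stand-in for the paper's geometric transformations)."""
    rng = np.random.default_rng(0) if rng is None else rng
    samples = []
    for _ in range(n):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        samples.append(np.roll(image, shift=(int(dy), int(dx)), axis=(0, 1)))
    return samples

def fused_score(score_fn, image, n=8):
    """Score each virtual sample and fuse with a simple mean.
    If individual scores have variance s^2, the mean of n scores has
    variance between s^2/n (independent scores) and s^2 (fully
    correlated scores), i.e. reduced by a factor between 1 and n."""
    scores = [score_fn(x) for x in virtual_samples(image, n=n)]
    return float(np.mean(scores))

# Toy scorer for illustration: correlation with a stored template.
template = np.zeros((16, 16))
template[6:10, 6:10] = 1.0

def score_fn(x):
    return float((x * template).sum())

# A probe image that is slightly misaligned with the template.
probe = np.roll(template, shift=(1, 0), axis=(0, 1))
print(fused_score(score_fn, probe, n=8))
```

Averaging over perturbed copies makes the fused score less sensitive to the exact alignment of the single probe image, which is the intuition behind the variance reduction claimed in the abstract.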
Published in ICASSP'03