Improving Face Authentication Using Virtual Samples

In this paper, we present a simple yet effective way to improve a face verification system by generating multiple virtual samples from the single image corresponding to an access request. These images are generated using simple geometric transformations. This technique is often used during training to improve the accuracy of a neural network model by making it robust to small changes in translation, scale and orientation. The main contribution of this paper is to introduce it at test time. By generating $N$ images from a single image and propagating them through a trained network model, one obtains $N$ scores. By fusing these scores with a simple mean operator, we show that the variance of the fused score is decreased by a factor between 1 and $N$. Experiments carried out on the XM2VTS database achieve new state-of-the-art performance.
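One way to see the claimed reduction factor, under the assumption (not stated in the abstract) that the $N$ scores $s_1,\dots,s_N$ for one access request share a common variance $\sigma^2$ and have average pairwise correlation $\rho$: the fused score $\bar{s} = \frac{1}{N}\sum_{i=1}^{N} s_i$ satisfies

$$\mathrm{Var}(\bar{s}) = \frac{1}{N^2}\Big(\sum_{i}\mathrm{Var}(s_i) + \sum_{i \neq j}\mathrm{Cov}(s_i, s_j)\Big) = \frac{\sigma^2}{N}\big(1 + (N-1)\rho\big),$$

which equals $\sigma^2/N$ for uncorrelated scores ($\rho = 0$) and $\sigma^2$ for fully correlated scores ($\rho = 1$), hence a reduction factor between 1 and $N$.

As an illustration only (the paper provides no code), a minimal sketch of this test-time fusion is given below. The trained verification model is represented by a hypothetical score_fn, and the perturbation ranges are assumptions for the sketch, not values taken from the paper.

import numpy as np
from scipy.ndimage import shift, rotate

def virtual_samples(image, shifts=(-2, 0, 2), angles=(-5.0, 0.0, 5.0)):
    # Generate virtual samples from a single probe image using small
    # geometric perturbations (translation and in-plane rotation).
    samples = []
    for dx in shifts:
        for dy in shifts:
            for angle in angles:
                sample = shift(image, (dy, dx), order=1, mode='nearest')
                sample = rotate(sample, angle, reshape=False, order=1, mode='nearest')
                samples.append(sample)
    return samples

def fused_score(image, score_fn):
    # Score every virtual sample with the trained verification model
    # (score_fn, assumed given) and fuse the N scores with a simple mean.
    scores = np.array([score_fn(s) for s in virtual_samples(image)])
    return float(scores.mean())

In this sketch, the access decision would be taken by thresholding fused_score(image, score_fn) rather than the single-image score score_fn(image).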


Published in:
IEEE International Conference on Acoustics, Speech, and Signal Processing, 40
Presented at:
IEEE International Conference on Acoustics, Speech, and Signal Processing
Year:
2003
Note:
IDIAP-RR