Deeply Vulnerable: a study of the robustness of face recognition to presentation attacks

This study examines the vulnerability of deep-learning-based face-recognition (FR) methods to presentation attacks (PAs). Recently proposed FR methods based on deep neural networks (DNNs) have been shown to outperform most other methods by a significant margin. In a trustworthy face-verification system, however, maximising recognition performance alone is not sufficient: the system should also be able to resist various kinds of attacks, including PAs. Previous experience has shown that the PA vulnerability of FR systems tends to increase with face-verification accuracy. Using several publicly available PA datasets, the authors show that DNN-based FR systems compensate for the variability between bona fide and PA samples and tend to score them similarly, which makes such systems extremely vulnerable to PAs. Experiments show the vulnerability of the studied DNN-based FR systems to be consistently higher than 90%, and often higher than 98%.
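The vulnerability figures quoted above are typically reported as the fraction of presentation attacks accepted by the verifier at its normal operating threshold (the IAPMR metric of ISO/IEC 30107-3). A minimal sketch of that computation, assuming hypothetical similarity-score arrays and an equal-error-rate operating point (the paper's exact evaluation protocol may differ):

```python
import numpy as np

def eer_threshold(genuine, impostor):
    """Pick the threshold where false match rate ~ false non-match rate (EER point)."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best_t, best_gap = thresholds[0], float("inf")
    for t in thresholds:
        fmr = np.mean(impostor >= t)   # impostors wrongly accepted
        fnmr = np.mean(genuine < t)    # genuine users wrongly rejected
        if abs(fmr - fnmr) < best_gap:
            best_t, best_gap = t, abs(fmr - fnmr)
    return best_t

def iapmr(attack_scores, threshold):
    """Impostor Attack Presentation Match Rate: fraction of PAs accepted as matches."""
    return float(np.mean(np.asarray(attack_scores) >= threshold))
```

If a DNN-based FR system scores PA samples almost as highly as bona fide samples, as the abstract reports, nearly all attack scores land above the verification threshold and the IAPMR approaches 100%.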


Published in:
IET Biometrics (The Institution of Engineering and Technology), 0-0
Year:
2017
Publisher:
Hertford: The Institution of Engineering and Technology (IET)
ISSN:
2047-4938
Note:
Accepted on 29-Sept-2017
 Record created 2017-11-19, last modified 2018-03-17
