Abstract

The vulnerability of deep-learning-based face-recognition (FR) methods to presentation attacks (PAs) is studied in this work. Recently proposed FR methods based on deep neural networks (DNNs) have been shown to outperform most other methods by a significant margin. In a trustworthy face-verification system, however, maximising recognition performance alone is not sufficient: the system should also be able to resist various kinds of attacks, including PAs. Previous experience has shown that the PA vulnerability of FR systems tends to increase with face-verification accuracy. Using several publicly available PA datasets, the authors show that DNN-based FR systems compensate for the variability between bona fide and PA samples and tend to score them similarly, which makes such FR systems extremely vulnerable to PAs. Experiments show the vulnerability of the studied DNN-based FR systems to be consistently higher than 90%, and often higher than 98%.
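
Vulnerability figures of this kind are conventionally reported as the fraction of presentation attacks whose comparison scores exceed the verification threshold, i.e. the attacks the verifier accepts as genuine (the IAPMR of ISO/IEC 30107-3). The sketch below is a minimal illustration of that measurement, not the authors' evaluation code: it assumes NumPy arrays of comparison scores where higher means a better match, sets the threshold from zero-effort impostor scores at a target false-acceptance rate, and counts the attack scores above it. The score distributions are made up purely for demonstration.

```python
import numpy as np

def far_threshold(impostor_scores, far=1e-3):
    """Score threshold at which zero-effort impostors are accepted
    at the requested false-acceptance rate (FAR)."""
    return np.quantile(np.asarray(impostor_scores), 1.0 - far)

def iapmr(attack_scores, threshold):
    """Fraction of presentation-attack scores accepted as genuine
    (Impostor Attack Presentation Match Rate)."""
    return float(np.mean(np.asarray(attack_scores) >= threshold))

# Hypothetical score distributions, for illustration only.
rng = np.random.default_rng(0)
zero_effort = rng.normal(0.2, 0.1, 10_000)  # unrelated identities
attacks = rng.normal(0.75, 0.1, 1_000)      # presentation attacks

thr = far_threshold(zero_effort, far=1e-3)
print(f"threshold @ FAR 0.1%: {thr:.3f}  IAPMR: {iapmr(attacks, thr):.1%}")
```

Under this protocol, a well-separated attack distribution (as the abstract reports for DNN-based FR) yields an IAPMR near 100% even at a strict FAR, which is what the reported >90% vulnerability figures express.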
