Explainable Face Verification via Feature-Guided Gradient Backpropagation
Recent years have witnessed significant advancement in face recognition (FR) techniques, with applications now widespread in daily life and security-sensitive areas. There is therefore a growing need for reliable interpretations of such systems' decisions. Existing studies relying on various mechanisms have investigated the use of saliency maps as an explanation approach, but each suffers from different limitations. This paper first explores the spatial relationship between a face image and its deep representation via gradient backpropagation. A new explanation approach, Feature-Guided Gradient Backpropagation (FGGB), is then proposed, which provides precise and insightful similarity and dissimilarity saliency maps to explain the "Accept" and "Reject" decisions of an FR system. Extensive visual comparison and quantitative measurement show that FGGB achieves comparable results for similarity maps and superior performance for dissimilarity maps when compared with current state-of-the-art explainable face verification approaches.
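The core idea described above — backpropagating a verification score through the embedding network to locate the pixels that drive an "Accept" decision — can be sketched on a toy model. The snippet below is a minimal illustration, not the paper's FGGB method: it assumes a hypothetical linear embedder `f(x) = W @ x` so the gradient of the cosine similarity can be written analytically; a real FR system would obtain the same gradient via automatic differentiation through a deep network.

```python
import numpy as np

# Hedged sketch: gradient-based similarity saliency for a toy linear
# face embedder f(x) = W @ x. All names (W, x1, x2, sizes) are
# illustrative assumptions, not the paper's actual architecture.

rng = np.random.default_rng(0)
d_pix, d_emb = 16, 4                      # toy "image" and embedding sizes
W = rng.standard_normal((d_emb, d_pix))   # stand-in for a deep FR network
x1 = rng.standard_normal(d_pix)           # probe image (flattened)
x2 = rng.standard_normal(d_pix)           # reference image (flattened)

a, b = W @ x1, W @ x2                     # deep representations
na, nb = np.linalg.norm(a), np.linalg.norm(b)
score = a @ b / (na * nb)                 # cosine similarity ("Accept" score)

# Analytic gradient of the score w.r.t. the embedding of the probe,
# then backpropagated to pixel space through the linear map:
# ds/da_i = b_i/(|a||b|) - (a.b) a_i / (|a|^3 |b|)
ds_da = b / (na * nb) - (a @ b) * a / (na**3 * nb)
saliency = np.abs(W.T @ ds_da)            # per-pixel similarity saliency

print(score, saliency.shape)
```

High-saliency pixels are those whose perturbation most changes the verification score; a dissimilarity map can be sketched analogously by backpropagating from the components of the representation that disagree between the two images.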