XAIface: a framework and toolkit for explainable face recognition
Artificial intelligence-based face recognition solutions are becoming increasingly popular. Therefore, it is crucial to fully understand and explain how these technologies work in order to make them more effective and acceptable to society. This is the goal of the CHIST-ERA project XAIface, whose final results are reported in this article: a framework and toolkit that improve AI decision explainability in the context of automated face recognition through several novel methods. These methods are integrated into an end-to-end face recognition demonstrator system, which facilitates studying the impact of various influencing factors and system processes on recognition performance. This allows us to visually explain the decisions made by the face verification pipeline for specific instances in our test set using heatmaps and locally interpretable features. Furthermore, we offer a comprehensive explanation of the end-to-end model by examining the relationship between verification failures and misclassifications of soft biometric facial traits.
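The abstract does not detail how the heatmap explanations are computed. As a minimal sketch of one common approach to this kind of visual explanation, the snippet below applies occlusion-based saliency to a face verification decision: it measures how much the cosine similarity between probe and reference embeddings drops when small patches of the probe are hidden. The function `embed_face` is a hypothetical placeholder for a real face embedding model and is not part of the XAIface toolkit.

```python
# Hedged sketch: occlusion-based saliency heatmap for a face verification decision.
# `embed_face` is a HYPOTHETICAL stand-in for a real face embedding network;
# replace it with an actual encoder (e.g. an ArcFace-style model) to use this.

import numpy as np

def embed_face(img: np.ndarray) -> np.ndarray:
    """Placeholder embedding: maps an HxWx3 face crop to a 512-d unit vector."""
    v = np.resize(img.mean(axis=2).flatten(), 512).astype(np.float64)
    return v / (np.linalg.norm(v) + 1e-12)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def occlusion_heatmap(probe: np.ndarray, reference: np.ndarray,
                      patch: int = 16, stride: int = 8) -> np.ndarray:
    """Slide a grey patch over the probe image and record how much the
    verification similarity drops; large drops mark regions the decision relies on."""
    ref_emb = embed_face(reference)
    base = cosine(embed_face(probe), ref_emb)
    h, w, _ = probe.shape
    heat = np.zeros((h, w))
    count = np.full((h, w), 1e-12)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = probe.copy()
            occluded[y:y + patch, x:x + patch] = 127          # grey occluder
            drop = base - cosine(embed_face(occluded), ref_emb)
            heat[y:y + patch, x:x + patch] += drop
            count[y:y + patch, x:x + patch] += 1
    return heat / count                                        # averaged similarity drop

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probe = rng.integers(0, 255, (112, 112, 3)).astype(np.float64)
    reference = rng.integers(0, 255, (112, 112, 3)).astype(np.float64)
    print(occlusion_heatmap(probe, reference).shape)           # (112, 112) saliency map
```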
Scopus EID: 2-s2.0-85218184075
Year: 2024
ISBN: 9798350378443
Review status: REVIEWED
Institution: EPFL
Event name | Event acronym | Event place | Event date |
CBMI 2024 | CBMI | Reykjavik, Iceland | 2024-09-18 - 2024-09-20 |