Disciplines
Law (100%)
Keywords
- Artificial Intelligence, Data Protection, Privacy, Autonomous Decision-making, Discrimination
Face recognition has become a key technology in our society, frequently used in many applications, while raising significant privacy concerns. As face recognition solutions based on artificial intelligence (AI) become more widespread, it is critical to fully understand and explain how these technologies work in order to make them more effective and better accepted by society. In this project, we focus on analysing the factors that influence the final decision of an AI-based face recognition system, as an essential step towards understanding and improving the underlying processes. The scientific approach pursued in the project is designed to be applicable to other use cases, such as object detection and pattern recognition tasks in a wider set of applications. Thanks to the interdisciplinary nature of the consortium, the outcomes of XAIface will affect many fields and can be summarized as follows: (i) develop clear legal guidelines on the use and design of AI-based face recognition following the privacy-by-design approach; (ii) disentangle demographic information (age, gender, ethnicity) from the overall face representation, both to understand the impact of such traits on face recognition and to develop demographic-free face recognition; (iii) address fairness and non-discrimination issues by de-biasing during training; (iv) optimize the trade-off between interpretability and performance; (v) create tools for assessing and measuring the performance, and explaining the decisions, of AI-based face recognition systems; and (vi) analyse the impact of image coding to better understand how future AI-based coding solutions may differ from a recognition-explainability point of view. The achieved results will feed into the implementation of an end-to-end face recognition system for studying the impact of the various system processes on recognition performance and explainability.
This will provide a use case study on how to perform explainability analysis with the tools provided by our project.
The CHIST-ERA project XAIface focuses on AI-based face recognition technologies, in particular on understanding and explaining how these technologies work in order to make them more effective and acceptable to society while respecting the legal framework. The final result is a framework and toolkit for improving the explainability of AI decisions in automated face recognition through several novel methods. These tools are integrated into an end-to-end face recognition demonstrator system, which facilitates studying the impact of various influencing factors and system processes on recognition performance. The demonstrator visually explains how the face verification pipeline reaches its decisions for specific instances in our test set, using heatmaps and locally interpretable features. Furthermore, we offer a comprehensive explanation of the end-to-end model by examining the relationship between verification failures and misclassifications of soft biometric facial traits. Based on these results and their repercussions, an extensive legal analysis is provided.
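The heatmap-style explanations described above can be illustrated with a minimal occlusion-sensitivity sketch: hide one image region at a time and record how much the verification score drops. This is a generic technique, not the project's actual pipeline; the function names and the toy cosine-similarity scorer below are illustrative assumptions.

```python
import numpy as np

def occlusion_heatmap(img, score_fn, patch=8, stride=8, fill=0.0):
    """Slide a masking patch over the image and record how much the
    verification score drops when each region is hidden. Large drops
    mark regions the decision relies on."""
    h, w = img.shape
    base = score_fn(img)
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = img.copy()
            occluded[y:y + patch, x:x + patch] = fill  # hide one region
            heat[i, j] = base - score_fn(occluded)     # score drop
    return heat

# Toy stand-in for a face-verification score: cosine similarity
# between the probe image and a fixed reference image.
rng = np.random.default_rng(0)
reference = rng.random((32, 32))

def toy_score(probe):
    a, b = probe.ravel(), reference.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

heat = occlusion_heatmap(reference.copy(), toy_score)
print(heat.shape)  # (4, 4): one sensitivity value per occluded region
```

In a real system `toy_score` would be replaced by the similarity between deep face embeddings of a probe and a gallery image, and `heat` would be upsampled and overlaid on the face as the heatmap.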
- Universität Wien - 100%
- Martin Winter, Joanneum Research, national collaboration partner
- Jean-Luc Dugelay, Institut Eurécom - France
- Fernando Pereira, Universitario de Santiago - Portugal
- Touradj Ebrahimi, École polytechnique fédérale de Lausanne - Switzerland
Research Output
- 3 Publications
2023
- Pfister J, "Using Open-Source Image Datasets for Research", journal article, Jusletter-IT. DOI: 10.38023/176ba478-dba7-49a1-aa5d-cc54a04fb105
2024
- Mirabet-Herranz N, "XAIface: A Framework and Toolkit for Explainable Face Recognition", conference proceeding, pp. 1-7. DOI: 10.1109/cbmi62980.2024.10859212
2024
- Pfister J, "KI und Recht. Zeig mir dein Gesicht. Rechtsfragen um die Gesichtserkennung", other publication type