
Explainability requirements and a declarative explanation tool: an integrated approach

STATE, Laura
2024

Abstract

In this thesis, we focus on eXplainable Artificial Intelligence (XAI) from an interdisciplinary perspective, with a strong foundation in computer science. We contribute to the field along two main lines. First, we discuss technical and legal requirements of explanations, with a focus on the European legal framework. To this end, we provide a theoretical discussion, propose a set of technical and legal desiderata, and develop a layered mapping between them. Ultimately, this mapping has to be interpreted as a system: dynamic, circular, and iterative. Additionally, we discuss an expert focus study, designed to understand the expectations of legal scholars towards XAI methods and to assess these methods against the General Data Protection Regulation (GDPR). The study builds on a credit case and draws on social science methodology. We find that XAI methods are hard to understand and lack relevant information, and that none of the presented explanations complies with the GDPR. We derive recommendations for practitioners of XAI, and pointers to legal topics that merit in-depth consideration.

The second main contribution of this thesis is the declarative explanation tool REASONX (reason to explain). Building on constraint logic programming (CLP), REASONX can integrate background knowledge in the form of linear constraints, is interactive, provides explanations under under-specified information, and can compute explanations across different time points and different models. REASONX works with any type of ML model, for classification tasks on tabular data. It provides explanations as decision rules and contrastive examples, based on an algebra of operators over theories of constraints. While the core of REASONX is implemented in CLP, we also provide a Python layer for the integration of data and models.
REASONX not only offers several practical functionalities but also puts its user in focus: it is the user who queries the tool and adapts these queries as needed.

Complementary to the two main lines, we present an evaluation of XAI methods on a use case: the prediction of the electrification rate from mobile phone data in Senegal. Evaluations of XAI methods on real-world cases, although urgently needed to verify these methods, remain scarce, which makes this work an important contribution in its own right. Further, it again emphasizes the importance of integrating background knowledge into explanations, thereby establishing a direct connection to REASONX.
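To make the notion of a contrastive example under background constraints concrete, the sketch below is a purely illustrative Python fragment: it is not the REASONX API and uses no CLP, and the linear model, feature names, and immutability constraint are all assumptions made up for the example. For a linear classifier, the closest point on the decision boundary along a single mutable feature can be solved in closed form.

```python
# Hypothetical sketch (NOT the REASONX API): a contrastive example for a
# linear classifier, with background knowledge expressed as a constraint
# that only one feature may change (the other is immutable, e.g. "age").
# The model predicts class 1 iff w . x + b >= 0.

def contrastive(w, b, x0, mutable):
    """Return the closest point to x0, changing only feature `mutable`,
    that reaches the favourable side of the boundary w . x + b = 0."""
    score = sum(wi * xi for wi, xi in zip(w, x0)) + b
    if score >= 0:
        return list(x0)  # already classified favourably
    if w[mutable] == 0:
        raise ValueError("boundary unreachable along the mutable feature")
    x = list(x0)
    # solve w[mutable] * (x0[mutable] + d) + (score - w[mutable]*x0[mutable]
    # contribution unchanged) = 0, i.e. shift by d = -score / w[mutable]
    x[mutable] = x0[mutable] - score / w[mutable]
    return x

# Toy credit scenario (made-up weights): feature 0 = age (immutable),
# feature 1 = income (mutable).
x_new = contrastive(w=[0.2, 0.5], b=-4.0, x0=[3.0, 5.0], mutable=1)
# x_new keeps age fixed and raises income just enough to reach the boundary
```

A declarative tool like REASONX generalizes this idea: arbitrary linear constraints over several features, and rule-based as well as contrastive answers, are derived by a constraint solver rather than by a closed-form formula.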
24 October 2024
English
Scuola Normale Superiore
Anonymous expert referees
Files in this item:
Tesi.pdf (Adobe PDF, 4.49 MB); under embargo until 25/10/2025; licence: All rights reserved

Documents in UNITESI are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/304297
The NBN code of this thesis is URN:NBN:IT:SNS-304297