Federated Learning of Explainable Artificial Intelligence models: real world applications and theoretical advances

RUFFINI, FABRIZIO
2026

Abstract

This thesis aims to provide a personal contribution to the research area of Trustworthy Artificial Intelligence (AI), focusing on aspects that are still largely unexplored. The objective is to develop systems that achieve high predictive accuracy while fostering trust between stakeholders and the AI system. Since trustworthiness is a multidimensional topic, this work focuses on two of its key requirements: transparency and data privacy. In this context, the thesis explores the use of Fed-XAI models (explainable AI models trained through Federated Learning) to simultaneously ensure data privacy and provide explanations. At the time of writing, most existing works address these issues separately. Furthermore, this work provides a comparison between the explanations generated by post-hoc methods applied to opaque models and those derived from interpretable-by-design models. Such a systematic comparison remains largely unexplored in the current literature, which focuses predominantly on post-hoc methods alone. The thesis is structured as a three-act play. The first two acts examine the application of different Fed-XAI models to real-world datasets, initially focusing on interpretable-by-design approaches and subsequently comparing them with opaque models. These analyses highlight the limitations of existing strategies and motivate the methodological framework introduced in the third act, which represents the main theoretical contribution of this thesis.
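To make the Fed-XAI idea above concrete, the sketch below (illustrative only, not the method developed in the thesis) shows the basic federated pattern with an interpretable model: each client fits a linear model on its private data, and a server aggregates the parameters by weighted averaging, so raw data never leaves the clients while the global model remains readable as one weight per feature. All function names and the toy data are assumptions for illustration.

# Minimal, hypothetical Fed-XAI sketch: interpretable linear models
# trained locally, aggregated centrally (FedAvg-style). Illustrative
# assumptions throughout; not the thesis's actual framework.
import numpy as np

def local_fit(X, y):
    """Least-squares fit on one client's private data (bias term appended)."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w  # interpretable: one weight per feature, plus a bias

def fedavg(weights, sizes):
    """Server-side aggregation: average client weights, weighted by dataset size."""
    return np.average(np.stack(weights), axis=0, weights=np.asarray(sizes, float))

# Simulate three clients holding private samples of the same underlying task.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])  # two feature weights and a bias
clients = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    y = X @ true_w[:2] + true_w[2] + rng.normal(scale=0.1, size=40)
    clients.append((X, y))

# Each client trains locally; only model parameters are shared with the server.
local_ws = [local_fit(X, y) for X, y in clients]
global_w = fedavg(local_ws, [len(y) for _, y in clients])
print("Global (interpretable) weights:", np.round(global_w, 3))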
16-Feb-2026
English
Federated Learning
Explainability
Trustworthy AI
Marcelloni, Francesco
Ducange, Pietro
Files in this item:

PhDAIThesis_RuffiniXVIII_pdfA.pdf
Open access
License: Creative Commons
Size: 9.12 MB
Format: Adobe PDF

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/359118
The NBN code of this thesis is URN:NBN:IT:UNIPI-359118