Design and Development of Explanation User Interfaces: Interactive Visual Dashboards and Design Guidelines for Explainable AI
CAPPUCCIO, ELEONORA
2025
Abstract
As artificial intelligence and machine learning systems become increasingly integrated into critical areas such as healthcare, finance, and public services, the need for transparency in their decision-making processes has become more pressing. While much of the research in Explainable AI has focused on improving algorithmic transparency, comparatively little attention has been given to how these explanations are presented to end-users. This thesis addresses this gap by focusing on the design and development of Explanation User Interfaces (XUIs), the critical link between algorithmic insights and human understanding.

Building on the Defense Advanced Research Projects Agency's (DARPA) three-part framework for eXplainable Artificial Intelligence (XAI), which distinguishes between the explainable model, the user interface, and the psychological needs of the user, this research positions Explanation User Interfaces as essential to making AI systems not just transparent, but meaningfully interpretable to users. Central to this work is the recognition that explanation is a design problem as much as it is a technical one; therefore, explanations must be communicated in ways that align with human reasoning, support exploration, and foster understanding.

To investigate this challenge, the thesis is structured around four main research contributions. First, a systematic literature review maps the current landscape of Explanation User Interface research, highlighting key design considerations and recurring challenges. This review leads to a set of practical guidelines for designing interfaces that support effective explanation. Second, the thesis introduces trace, an interactive visual interface for rule-based explanations, designed with a strong focus on interactivity and usability. Third, the work explores novel interaction paradigms through tools that allow users to engage with counterfactual explanations and latent space visualisations, encouraging hands-on exploration of AI decision boundaries. Finally, the thesis proposes methods for integrating domain knowledge directly into explanation algorithms, making the output more aligned with human understanding and expert reasoning.

By combining insights from human-computer interaction, visual analytics, and explainable machine learning, this thesis contributes a heterogeneous approach to explainability, one that not only improves transparency but also enhances the usability and trustworthiness of AI systems. The findings provide both conceptual and practical tools for advancing the design of explanation interfaces and are intended to support future research and real-world deployment of human-centred Explainable AI systems.
| File | Size | Format |
|---|---|---|
| PhDAI_thesis_CAPPUCCIO.pdf (embargo until 05/12/2028; License: Creative Commons) | 15.02 MB | Adobe PDF |
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/354148
URN:NBN:IT:UNIPI-354148