
Explanation Methods for Sequential Data Models

SPINNATO, Francesco
2024

Abstract

Sequential data is integral to many fields and plays a fundamental role in high-stakes decision-making in domains such as healthcare, finance, and transportation. However, state-of-the-art approaches for sequential data prediction are usually black-box models that are hardly interpretable from a human standpoint. In critical domains, the ability to explain a model's decisions is vital to establishing a trustworthy relationship between human experts and AI systems. Thus, effective eXplainable AI (XAI) methods for sequential data can provide deeper insights across various domains, enhancing trust in machine learning decisions and reinforcing expert accountability in decision-making processes. This work tackles the challenge of explaining sequential data models from three distinct angles: the input, the output, and the explanation. Specifically, the input angle focuses on the diverse kinds of sequential data, proposing a comprehensive definition that encompasses forms such as time series, trajectories, and text. The output angle pertains to the target variable in supervised learning, which can be either categorical or continuous, as in classification and regression tasks. Lastly, this work focuses on the explanation itself, the core of XAI, presenting various visualization techniques to help users understand predictions from sequential data models. We analyze different combinations of input, output, and explanation types, proposing solutions tailored to the unique requirements of each task and challenge.
Defense date: 9 May 2024
Language: English
Institution: Scuola Normale Superiore
Reviewers: Anonymous experts
Files in this item:
Tesi.pdf — open access — License: All rights reserved — 11.15 MB, Adobe PDF

Documents in UNITESI are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/304287
The NBN code of this thesis is URN:NBN:IT:SNS-304287