Understanding the Behavior and Predictions of Classifiers: Explainable AI in a Radically Changing Field
Marcuzzo, Matteo
2026
Abstract
Model opacity, i.e., the unintelligibility of complex machine learning systems, is a critical challenge in modern AI. This thesis addresses it by developing methods, evaluations, and benchmarks that advance transparent AI development. The work lies at the intersection of eXplainable AI (XAI) and Natural Language Processing (NLP), with applications in Computer Vision (CV) and medical diagnosis. We first investigate Language Models (LMs) in Hierarchical Text Classification (HTC), a task whose label hierarchy enables more transparent, gradual classification. We develop novel approaches for applying pretrained LMs to HTC and analyze how structural inductive biases can enhance performance. Motivated by the challenge of model understanding, we then provide an overview of the XAI landscape, structuring its terminology, categorizing its methods, and analyzing its evaluation challenges. Building on this groundwork, we develop a task decomposition methodology that yields an intrinsically interpretable CV model and a transparent NLP explainer. Separately, we introduce a novel method that renders data-driven Fuzzy Inference Systems linguistically interpretable via automated labeling of their terms. Finally, we address the frontier challenge of evaluating and understanding Large Language Models (LLMs). We introduce a novel literature-based benchmark that tasks models with inferring abstract morals from fables. Our evaluation exposes the brittleness of current models, revealing a reliance on superficial pattern matching over robust reasoning.
| File | Size | Format |
|---|---|---|
| Marcuzzo_Matteo_PhD_Thesis.pdf (embargoed until 20/03/2027; license: all rights reserved) | 10.11 MB | Adobe PDF |
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/361794
URN:NBN:IT:UNIVE-361794