Transparent and Reliable AI for the Financial Domain
MIOLA, ARIANNA
2026
Abstract
Trust is the foundational currency of finance, yet its absence remains a major obstacle to the adoption of artificial intelligence across the sector. As machine learning and large language models increasingly underpin decision making, from credit scoring and risk assessment to regulatory reporting, financial institutions face the challenge of deploying systems that are not only accurate but also explainable, auditable, and reproducible. This thesis addresses the dual scientific and ethical problem of designing AI that is transparent and reliable under high-stakes operational conditions. The proposed research develops a structured framework, termed the transparency stack, composed of three complementary pillars: Explainable AI (XAI), Grounded Generation, and Mechanistic Interpretability. XAI establishes quantitative metrics, such as Effective Compactness, Rank Quality Index, and Stability, to evaluate explanation faithfulness and reproducibility across financial models. Grounded Generation extends output transparency through generative reasoning that integrates document retrieval, grounding LLM responses in verifiable sources and reducing hallucination. Mechanistic Interpretability advances internal transparency by probing the computational circuits of transformer architectures, revealing how reasoning-like behaviors emerge within model parameters. Together, these pillars define a methodological continuum linking interpretability, verifiability, and mechanistic understanding. The thesis unites transparency and reliability as interdependent constructs: transparency enables epistemic access to AI reasoning and evidence, while reliability ensures these properties remain stable across contexts and perturbations.
By bridging empirical evaluation, grounded generation, and mechanistic analysis, this work advances the development of Transparent and Reliable AI, a paradigm for financial systems that meet regulatory, ethical, and scientific standards of trust.

File: phd_unimib_883921.pdf (open access; license: all rights reserved; 3.65 MB; Adobe PDF)
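The Grounded Generation pillar described in the abstract pairs generation with document retrieval so that outputs can be traced to verifiable sources. The thesis's actual pipeline is not reproduced here; the following is a minimal, self-contained sketch of the grounding idea only, using a toy word-overlap retriever and hypothetical documents — none of the names or data below come from the thesis.

```python
# Minimal sketch of retrieval-grounded answering: responses are assembled
# only from retrieved passages, each tagged with its source, and the system
# refuses to answer when nothing relevant is found, instead of hallucinating.
# The corpus, scorer, and function names are illustrative assumptions.

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[tuple[str, str]]:
    """Rank passages by word overlap with the query (toy relevance scorer)."""
    q = tokenize(query)
    ranked = sorted(corpus.items(),
                    key=lambda item: len(q & tokenize(item[1])),
                    reverse=True)
    return ranked[:k]

def grounded_answer(query: str, corpus: dict[str, str]) -> str:
    """Answer using only citable passages; refuse if nothing matches."""
    q = tokenize(query)
    hits = [(doc_id, text) for doc_id, text in retrieve(query, corpus)
            if q & tokenize(text)]  # drop passages with no lexical support
    if not hits:
        return "No grounded answer available."
    return " ".join(f"{text} [source: {doc_id}]" for doc_id, text in hits)

corpus = {
    "doc-1": "The borrower credit score is computed from repayment history.",
    "doc-2": "Regulatory reporting requires quarterly risk disclosures.",
}
print(grounded_answer("How is the credit score computed?", corpus))
# → The borrower credit score is computed from repayment history. [source: doc-1]
```

In a real system the word-overlap scorer would be replaced by dense or hybrid retrieval, but the design point carries over: every emitted claim is tied to a retrievable source, and the absence of supporting evidence yields a refusal rather than an unsupported answer.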
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/360651
URN:NBN:IT:UNIMIB-360651