
Causal Reasoning for Robust and Realistic Algorithmic Decision-Making

CINQUINI, MARTINA
2025

Abstract

As Machine Learning (ML) systems increasingly impact socially and ethically sensitive domains, there has been a growing emphasis on grounding predictions in causal relationships rather than correlational patterns to ensure more robust, generalizable, and trustworthy decisions. Recent efforts have thus focused on addressing key challenges in ML from a causal perspective, including generating data that faithfully captures the underlying mechanisms of real-world processes and mitigating unfair algorithmic outcomes by determining the root causes of biased decisions. Other directions have explored identifying causal factors to explain model predictions and designing actionable strategies to assist users in changing automated decisions. Lastly, given that many real-world applications involve dynamic systems, attention has been dedicated to handling the complexities of temporal dependencies and delayed effects. This thesis investigates how causal reasoning can be effectively incorporated into various facets of algorithmic decision-making, focusing on two main research directions: its integration into synthetic data generation and its extension over time. For the first direction, we propose a method that leverages causal learning to guide the generation of synthetic datasets. Building on this, we then introduce a framework for enhancing post-hoc explainability, which exploits causal generation to produce more realistic and plausible local neighbourhoods around instances. Finally, to examine how varying bias levels affect algorithmic performance, we design a causality-aware data generator that supports configurable bias settings. For the second direction, we introduce a causal algorithmic recourse strategy that explicitly accounts for the temporal dimension by evaluating the cost of actions with respect to the time needed for their effects to manifest. This perspective is crucial to ensure realistic and context-sensitive recommendations, as some decisions lead to immediate outcomes while others require longer implementation periods. Moreover, we propose a framework for causal inference in dynamical systems, combining principles from structural causal models and autoregressive models, to enable robust estimation of the causal effect of interventions over time. In summary, this thesis contributes to developing robust and realistic algorithmic decision-making processes rooted in causal reasoning, thereby enhancing both their trustworthiness and applicability in high-stakes real-world scenarios.
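To make the notion of causality-aware data generation mentioned above concrete, the following is a minimal sketch of sampling from a structural causal model, both observationally and under an intervention. The graph, coefficients, and variable names here are purely illustrative assumptions, not the thesis's actual generator.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_observational(n):
    # Hypothetical linear SCM with graph X1 -> X2 -> Y and X1 -> Y.
    x1 = rng.normal(0.0, 1.0, n)                  # exogenous root cause
    x2 = 0.8 * x1 + rng.normal(0.0, 0.5, n)      # effect mediated by X1
    y = 1.5 * x2 - 0.4 * x1 + rng.normal(0.0, 0.1, n)
    return np.column_stack([x1, x2, y])

def sample_interventional(n, x2_value):
    # do(X2 = x2_value): sever X2 from its parent X1 and fix its value;
    # downstream mechanisms are left unchanged.
    x1 = rng.normal(0.0, 1.0, n)
    x2 = np.full(n, x2_value)
    y = 1.5 * x2 - 0.4 * x1 + rng.normal(0.0, 0.1, n)
    return np.column_stack([x1, x2, y])

obs = sample_observational(10_000)
intv = sample_interventional(10_000, 1.0)
```

Sampling each variable from its parents in topological order is what lets a causality-aware generator answer interventional queries, not just replicate observational correlations.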
7 July 2025
English
Causality
Fairness
Explainable Artificial Intelligence
Algorithmic Recourse
Dynamical Systems
Guidotti, Riccardo
Ruggieri, Salvatore
Files in this item:

File: PhD_Thesis_Cinquini_Martina.pdf
Size: 11.04 MB
Format: Adobe PDF
Access: open access

File: Report.pdf
Size: 65.1 kB
Format: Adobe PDF
Access: not available

Documents in UNITESI are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/216702
The NBN code of this thesis is URN:NBN:IT:UNIPI-216702