Personalized Algorithmic Recourse: Towards a human-centric approach for algorithmic contestability
De Toni, Giovanni
2025
Abstract
The increasing dependence on data-driven systems in high-stakes decision-making domains, such as lending, employment, and legal proceedings, has raised significant concerns about fairness, transparency, and accountability. High-profile cases, such as the racially biased COMPAS algorithm, highlight the potential societal risks of these technologies. A critical challenge is the lack of accessible mechanisms for individuals to contest opaque and often inaccessible automated decisions. This dissertation addresses the emerging concept of algorithmic contestability, which enables individuals to challenge, appeal, or seek explanations for AI-driven decisions. However, while legal frameworks such as the EU AI Act conceptualize contestability, they do not specify technically or scientifically actionable methodologies for its effective implementation. In machine learning, algorithmic recourse describes techniques that provide actionable explanations to help individuals understand and potentially overturn adverse decisions. Counterfactual explanations (CEs) are central to this concept, since they identify changes a user could make to achieve a favourable outcome. For instance, a CE might suggest that a loan application would have been approved if the applicant’s salary were 10% higher. While CEs resonate with human reasoning and offer practical benefits, existing methods often neglect user preferences, personal constraints, and the dynamic nature of real-world environments, limiting their effectiveness and accessibility. This dissertation advances the field by introducing a human-centric framework for algorithmic contestability that integrates individuals into the decision-contestation process. It addresses four key research areas through the following contributions: (i) a novel reinforcement learning-based approach that generates personalized counterfactuals while minimizing computational effort, accompanied by a rule-based surrogate model that provides detailed explanations without sacrificing efficiency; (ii) a new algorithm, inspired by Bayesian preference elicitation, that elicits user preferences in real time to tailor counterfactual explanations interactively, representing the first human-in-the-loop framework for algorithmic recourse; (iii) a guided web interface that enhances user engagement and simplifies the process of obtaining recourse, validated through comparative user studies of guided and exploratory interaction modalities; and (iv) a causal temporal framework that addresses the impact of user dynamics and environmental changes on the validity of algorithmic recourse, highlighting the fundamental challenges and enabling the study of its long-term feasibility and effectiveness. By advancing personalized, interactive, and robust methods for algorithmic recourse, this dissertation outlines a potential pathway to operationalize algorithmic contestability. These contributions aim to enhance user agency, foster trust, and promote equitable and transparent AI-driven systems.
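To make the notion of a counterfactual explanation concrete, the sketch below searches for a nearby input that flips a toy loan classifier's decision, following the generic distance-penalized formulation (in the spirit of Wachter et al.). It is a minimal illustration, not the reinforcement-learning or preference-elicitation methods contributed by this dissertation; the function name, data, and parameters are hypothetical.

```python
# Illustrative sketch only: a plain distance-penalized counterfactual search,
# NOT the dissertation's RL-based method. All names here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy "loan" data: two features (e.g., salary, debt); approve when the
# first outweighs the second.
X = rng.normal(size=(500, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def find_counterfactual(x, clf, lam=0.1, steps=2000, lr=0.05):
    """Search for x' near x that the classifier maps to the favourable class (1).

    Minimizes (1 - p(y=1|x'))^2 + lam * ||x' - x||^2 by random local search,
    so it only needs black-box access to predict_proba.
    """
    def loss(z):
        p = clf.predict_proba(z.reshape(1, -1))[0, 1]
        return (1.0 - p) ** 2 + lam * np.sum((z - x) ** 2)

    best = x.copy()
    for _ in range(steps):
        cand = best + lr * rng.normal(size=x.shape)  # random perturbation
        if loss(cand) < loss(best):
            best = cand
    return best

x = np.array([-0.5, 0.8])                      # a rejected applicant
x_cf = find_counterfactual(x, clf)
print("original prediction:", clf.predict([x])[0])
print("counterfactual:", np.round(x_cf, 2),
      "-> prediction:", clf.predict([x_cf])[0])
```

The difference `x_cf - x` is the recourse suggestion (e.g., "raise salary, lower debt"); the dissertation's contributions concern making such suggestions personalized, interactive, and robust over time.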
| File | Size | Format |
|---|---|---|
| phd_unitn_De Toni_Giovanni.pdf (embargo until 17/04/2026) | 11.21 MB | Adobe PDF |
Documents in UNITESI are protected by copyright, and all rights are reserved unless otherwise indicated.
https://hdl.handle.net/20.500.14242/208588
URN:NBN:IT:UNITN-208588