Accountability Framework for Trustworthy and Human-Centric AI: A Socio-Technical Approach to Human Oversight of AI Systems for Judicial Decision-Making
CARNAT, IRINA
2025
Abstract
This Thesis explores the challenges of implementing artificial intelligence (AI) systems in judicial decision-making while preserving human accountability and protecting the fundamental right to an effective judicial remedy. Through systematic analysis of the theoretical foundations of accountability, regulatory frameworks, and socio-technical approaches, this research develops a comprehensive multilayered accountability framework for human-centric and trustworthy AI in high-stakes contexts. Using the COMPAS case as an example, the research examines how AI’s defining characteristics, namely opacity, complexity, and autonomy, not only disrupt traditional liability regimes but also create accountability gaps in judicial processes, undermining human agency and judicial responsibility. The research establishes that effective AI governance requires understanding accountability both as a virtue aligned with moral responsibility and as a practical mechanism implemented through legal frameworks. This Thesis makes several significant contributions to the field. First, it develops a theoretical framework distinguishing between responsibility, accountability, and liability for AI systems, emphasizing that meaningful human oversight must be maintained to protect the right to an effective judicial remedy. Second, it critically analyzes the EU’s regulatory approach to AI, revealing tensions between product safety approaches and fundamental rights protection in the AI Act. The research identifies how the concept of ‘material influence’ serves as a crucial threshold determining risk classification, and questions whether harmonized technical standards adequately address fundamental rights concerns. Third, it offers a socio-technical perspective on generative Large Language Models in legal contexts, demonstrating how anthropomorphic features and ‘hallucinations’ increase automation bias in decision-making. It reveals limitations in both technical solutions and regulatory approaches that mandate human oversight without addressing cognitive limitations. Finally, this research culminates in the development of two distinct yet connected accountability frameworks for high-risk and non-high-risk AI systems that distribute obligations across the AI value chain, from model providers to system developers and deployers. These frameworks are validated through experimental human-LLM collaborative annotation processes that demonstrate how accountability can be preserved in emerging AI applications. This Thesis concludes that effective governance of AI in judicial contexts requires multilayered accountability that addresses both ex ante risk management and ex post contestability. Through this approach, the research makes a significant contribution to ensuring technological innovation serves rather than undermines the administration of justice in compliance with the rule of law, offering insights relevant to all high-stakes public sector decision-making processes augmented by AI.
File | Size | Format
---|---|---
Carnat_PhD_Thesis_AI_pdfA.pdf (under embargo until 10/07/2028) | 3.68 MB | Adobe PDF
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/218723
URN:NBN:IT:UNIPI-218723