
The Post Facto Investigation Of Automated Governance Projects: Revealing The Value Of A Sociotechnical Approach

GAUR, MITISHA
2026

Abstract

Trustworthy adoption of AI systems within the decision-making pipelines of public authorities (Automated Governance) is a tall order: public authorities must navigate not only the nuances of adopting AI, a disruptive and transformative technology, but also comply with administrative law principles focused on preserving the rule of law and upholding fundamental rights. This thesis investigates the complex challenges facing public authorities across Europe as they increasingly adopt Automated Governance systems, revealing that while these systems may promise enhanced efficiency and reduced operational costs, significant impediments plague their sustainable and scalable implementation. These impediments include a lack of meaningful transparency, a lack of meaningful human control, an inability to ensure adequate human oversight, and a vacuum of participatory, citizen-centric design aimed at ensuring meaningful interaction between Automated Governance systems and decision subjects. The thesis further finds that public authorities, encouraged to chase 'first-adopter' status in embracing Automated Governance systems, have developed a tunnel vision focused primarily on technical elements such as AI algorithms, datasets, and technical infrastructure, while neglecting crucial non-technical factors such as workforce competencies, AI literacy, process pipelines, and organisational structures. This has led to the deterioration of fundamental rights, the erosion of administrative discretion, and a loss of public trust.
Moving beyond the dominant approaches of techno-optimism (which views Automated Governance as a panacea for administrative inefficiency) and techno-pessimism (which argues against Automated Governance by critically analysing the adverse impacts of its adoption), this study advocates a techno-pragmatic approach grounded in sociotechnical systems (STS) theory, which recognises three interconnected elements within institutions: (1) technological elements (AI algorithms, data, and infrastructure); (2) organisational elements (organisational structures, AI governance policies, and risk-mitigation mechanisms); and (3) social elements (AI literacy levels, human autonomy, and behavioural factors). The study adopts a mixed-methods approach combining comprehensive desk research with use-case analysis of five prominent instances of AI failure across public authorities: (i) the Dutch Taxation Authority's Systeem Risico Indicatie (SyRI), (ii) Trelleborg Municipality's welfare assessment system, (iii) the UK Post Office's Horizon software, (iv) the UK Department for Work and Pensions' fraud risk model, and (v) the UK Home Office's visa screening system. These cases are assessed against seven key requirements derived from a combined reading of the EU HLEG Ethics Guidelines and the EU AI Act. From this analysis, the study identifies four core issues plaguing the adoption of Automated Governance systems: (1) broad motivations with inadequate planning; (2) inadequate internal AI governance mechanisms; (3) insufficient meaningful transparency; and (4) systematic exclusion of decision subjects from the development and deployment of Automated Governance.
The study examines the EU AI Act as the primary regulatory framework, finding that while it addresses many of these concerns and embodies sociotechnical parity across its provisions, it focuses primarily on high-risk AI systems, leaving gaps in regulatory coverage for non-high-risk AI systems that public authorities may adopt and that may nonetheless pose risks to health, safety, and fundamental rights despite their low risk classification. To address these challenges, the thesis proposes a comprehensive sociotechnical framework built on three core building blocks: (i) meaningful transparency, (ii) algorithmic accountability, and (iii) human oversight. It also recommends institutional best practices, including treating AI adoption as a comprehensive sociotechnical endeavour, establishing participatory and citizen-centric AI systems, and bridging accountability gaps through enforced meaningful-transparency requirements. The research contributes to AI governance scholarship by providing empirical evidence of AI failures, demonstrating the practical application of sociotechnical systems theory, identifying regulatory gaps, and offering actionable recommendations and institutional safeguards that go beyond the current provisions of the EU AI Act. It ultimately argues that successful AI adoption across public authorities requires a pragmatic, sociotechnically cognisant approach that creates a synergetic human-AI partnership while actively preserving democratic principles and fundamental rights.
7 January 2026
English
Automated Decision Making
Sociotechnical Systems
Artificial Intelligence
Meaningful Transparency
Human Oversight
Algorithmic Accountability
Public Administration
Fundamental Rights
Data Governance
Techno-Optimism
Techno-Pragmatism
Techno-Pessimism
AI Act
AI Regulation
AI Governance
Comandè, Giovanni
Rinzivillo, Salvatore
Files in this item:

File: MitishaGaur.PhDThesis.pdf
Embargo until: 09/01/2029
Licence: All rights reserved
Size: 4.85 MB
Format: Adobe PDF

Documents in UNITESI are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/356241
The NBN code of this thesis is URN:NBN:IT:UNIPI-356241