
A layered and modular architecture for EU artificial intelligence governance: harmonizing institutional design under the AI Act

PARISINI, EMANUELE
2026

Abstract

This thesis examines the implementation of the European Union's Artificial Intelligence Act (AI Act) through an institutional governance perspective, focusing on the emerging fragmentation in national implementation approaches. The research employs Gasser and Almeida's layered governance model as its primary theoretical framework, conceptualizing AI governance as comprising interconnected technical, ethical, and social/legal dimensions that require coordinated oversight mechanisms. This framework is complemented by contemporary AI governance literature, institutional theory, and regulatory governance scholarship to analyze the complex institutional dynamics of multilevel AI governance. The study adopts a multidimensional methodological approach combining comparative institutional analysis, document analysis, and theoretical triangulation to examine implementation patterns across the EU. Through systematic examination of implementation decisions in Member States, the research identifies significant divergence in the institutional arrangements established under Article 28 and Article 70 of the AI Act, which mandate the designation of notifying authorities and market surveillance authorities, respectively. To address this fragmentation, the thesis proposes a comprehensive governance reform centered on two complementary institutional innovations. The first is the establishment of integrated National AI Offices that would serve as unified authorities combining market surveillance and notifying authority functions while maintaining the necessary internal separations; these offices would establish formal coordination protocols with sectoral regulators and serve as single points of contact for stakeholders. The second is an enhanced AI Board with a configuration model inspired by the Council of the EU, featuring structured representation from National AI Offices, data protection authorities, and relevant sectoral regulators depending on the subject matter. This three-pronged representation aims to ensure comprehensive coverage across governance layers while balancing consistency with necessary contextual adaptation. The proposed model is empirically validated through structured case studies across three distinct regulatory domains governed by the AI Act: prohibited practices, regulatory sandboxes, and high-risk AI systems. This validation aims to demonstrate how the reformed governance architecture would address specific implementation challenges while enhancing both vertical integration within Member States and horizontal coordination across the European regulatory landscape. Despite acknowledged limitations regarding the emergent nature of AI governance and the limited empirical evidence on long-term implementation effects, this research may offer significant contributions to both scholarly understanding and practical governance implementation. The findings reveal fundamental tensions in institutional design for the AI domain and provide empirically grounded recommendations for enhancing regulatory coherence. By examining the interplay between institutional structures and governance effectiveness, the study contributes to the theoretical understanding of how administrative architectures shape regulatory outcomes, while providing actionable insights for policymakers navigating the complex implementation landscape of the AI Act.
7 January 2026
English
artificial intelligence
machine learning
AI Act
European regulation
market surveillance authorities
notifying authorities
high-risk AI systems
sandbox
Pedreschi, Dino
Misuraca, Gianluca Carlo
Files in this item:
Tesi_EP.pdf — Adobe PDF, 2.18 MB — open access — Licence: Creative Commons

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/356242
The NBN code of this thesis is URN:NBN:IT:UNIPI-356242