RESPONSIBLE AI GOVERNANCE: BALANCING THE GOALS OF BIAS MITIGATION AND PRIVACY PROTECTION THROUGH TRANSPARENCY

NAVDARASHVILI, TAMAR
2026

Abstract

Amid increasing societal reliance on artificial intelligence (AI), this thesis begins by establishing a foundational understanding of AI systems, their capabilities, applications, and social implications, before turning to its central focus: the interplay between the goals of debiasing AI systems and safeguarding privacy. While the development and use of AI systems offer significant benefits, they also present serious risks, including data-driven bias that reinforces social inequalities and improper data handling that compromises privacy. Addressing these dual challenges is central to the European Union's (EU) framework for responsible AI governance. Against this backdrop, the study critically engages with the technical and ethical dimensions of how these pressing concerns intersect. It then examines two key regulatory pillars, the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AIA), recognizing their foundational roles while also exposing regulatory blind spots that warrant greater attention. The analysis highlights the significance of sound data strategies, underscoring the critical role of proportionate, contextual, and evidence-based approaches. The central argument of this thesis is that the EU's responsible AI governance framework should be rooted in transparent and accountable decision-making, in which technical, ethical, and regulatory strategies are coherently aligned. To balance bias mitigation and privacy protection, the thesis proposes dynamic transparency as both a bridging principle and an internal governance approach. When initiatives are driven by strong internal commitment and reinforced by externally coordinated efforts among relevant actors through diverse transparency-oriented mechanisms, the groundwork is laid for AI systems that are not only innovative but also developed and deployed responsibly.
9 March 2026
English
COMINELLI, LUIGI
GALETTA, DIANA URANIA
Università degli Studi di Milano
161
Files in this item:
File: phd_unimi_R14003.pdf
Embargo until 24 August 2027
License: Creative Commons
Size: 2.02 MB
Format: Adobe PDF

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/360488
The NBN code of this thesis is URN:NBN:IT:UNIMI-360488