Bias and Miscomputation. A Philosophical and Formal Framework for Machine Learning Unfairness

Author: Chiara Manganini
Year: 2026

Abstract

As Machine Learning (ML) systems are increasingly being used in critical domains, the need for a coherent framework to account for the discriminatory effects of their outcomes becomes urgent. In computer science, existing approaches tend to emphasise the role of ML design, presenting algorithmic fairness as a matter of making better design choices, especially at the level of the data used to train the model. This dissertation challenges the adequacy of such a design-centric perspective on the problem of unfair ML predictions. It does so by reconnecting it with broader and longer-standing issues within the philosophy of computational artefacts that have largely been overlooked in the current debate in the ethics of artificial intelligence. The primary contribution of this thesis is to reframe the analysis of algorithmic discrimination around the notions of use, maintenance, and repair of ML systems. Specifically, I argue that the correctness criteria of an ML system should be reformulated in terms of the contextual convergence of the diverse normative requirements of the agents who use it. Compared to other accounts of ML normativity, this reconceptualisation avoids succumbing to scepticism about implementation ascriptions, while returning a more dynamic and realistic understanding of how normative requirements circulate, feed back, conflict, and adapt across complex ML systems. Crucially, I claim that this shift has the advantage of allowing for a richer understanding of algorithmic fairness, viewing it as a plurality of data repair practices, rather than a static value embodied by certain ML designs. Two main formal contributions follow from this analysis: the introduction of validity criteria for ML predictions and the development of a novel logical framework to reason about the impact of errors in the input data on the fairness of algorithmic outcomes.
Date: 2 April 2026
Language: English
Supervisor: Giuseppe Primiero
Institution: Università degli Studi di Milano
Pages: 145
Files in this item:

phd_unimi_R13840.pdf — open access; licence: Creative Commons; size: 12.19 MB; format: Adobe PDF

Documents in UNITESI are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/363747
The NBN code of this thesis is URN:NBN:IT:UNIMI-363747