Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making
Castelnovo, Alessandro
2024
Abstract
In an era characterized by the pervasive integration of artificial intelligence into decision-making processes across diverse industries, the demand for trust has never been more pronounced. This thesis presents a comprehensive exploration of bias and fairness, with particular emphasis on their ramifications within the banking sector, where AI-driven decisions bear substantial societal consequences. In this context, the integration of fairness, explainability, and human oversight is of utmost importance, culminating in the establishment of what is commonly referred to as "Responsible AI". The thesis emphasizes the critical importance of addressing biases while developing a corporate culture that aligns with both AI regulations and universal human rights standards, particularly in the realm of automated decision-making systems. The work is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias. Within understanding bias, we introduce Bias On Demand, a modelling framework that generates synthetic data to illustrate various types of bias. Additionally, we analyze the intricate landscape of fairness metrics, shedding light on their interconnections and nuances. In mitigating bias, we present BeFair, a versatile toolkit designed to enable the practical implementation of fairness in real-world scenarios, and FFTree, a transparent and flexible fairness-aware decision tree. In accounting for bias, we propose a structured roadmap for addressing fairness in the banking sector, underscoring the importance of interdisciplinary collaboration in addressing contextual and societal implications holistically. We also introduce FairView, a novel tool that supports users in selecting ethical frameworks when addressing fairness. Our investigation into the dynamic nature of fairness over time culminates in FairX, a strategy based on eXplainable AI for monitoring fairness trends. These contributions are validated through practical application in real-world scenarios in collaboration with Intesa Sanpaolo. This collaborative effort not only deepens our understanding of fairness but also provides practical tools for the responsible implementation of AI-based decision-making systems. In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages, further promoting progress in the field of AI fairness.
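By way of illustration of the fairness metrics the abstract refers to, the minimal Python sketch below computes the demographic parity difference, one of the most common group fairness metrics. This is a generic, self-contained example: the function name and the toy data are our own and are not drawn from the thesis's released packages.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between two groups.

    A value of 0 means the predictions satisfy demographic parity;
    larger values indicate a stronger disparity between the groups.
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()  # positive rate for group 0
    rate_b = y_pred[sensitive == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical loan-approval predictions (1 = approved) for two groups
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]  # hypothetical protected attribute
print(demographic_parity_difference(y_pred, sensitive))  # |0.75 - 0.25| = 0.5
```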
File | Size | Format
---|---|---
phd_unimib_736101.pdf (open access) | 2.44 MB | Adobe PDF
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/172757
URN:NBN:IT:UNIMIB-172757