Ethics and Privacy in the Age of AI: Challenges and Solutions for Human Empowerment
ALFIERI, COSTANZA
2025
Abstract
As Artificial Intelligence (AI) technologies increasingly permeate our daily lives and societies, numerous ethical concerns and privacy risks emerge. Regulations such as the GDPR (2016) and the AI Act (2024) have been introduced to address these challenges; however, regulations alone may not be sufficient to fully protect humans from certain risks while allowing them to harness the benefits of these technologies. As these systems grow more sophisticated and embedded in decision-making processes, the urgency to address ethical and privacy-related questions becomes even more pressing. In particular, the asymmetry of power between technology providers and end-users raises fundamental concerns about autonomy, consent, and accountability. In this thesis, we address the issue of empowering human agents in their interaction with the digital world from a dual perspective. On one hand, the aim is to empower users with software tools that safeguard their ethical preferences and protect their privacy requirements. On the other hand, we strive to contribute to the assessment of the societal impact of AI technologies, a crucial step for fostering, hindering, or regulating their development. By integrating both individual-level protections and system-level evaluations, the approach aims to foster a more balanced human-AI relationship. To achieve these goals, Part I focuses on the elicitation of human ethical preferences, to be implemented into a software exoskeleton that protects humans in their interaction with the digital world (Exosoul). This is accomplished through the design of two questionnaires, grounded in two theoretical frameworks: one rooted in moral psychology and the other in moral philosophy. These tools are intended to translate abstract moral principles into actionable insights that are embedded into the exoskeleton. The result is a personalized ethical filter that reflects users’ values in real-time digital interactions. 
Part II, meanwhile, explores privacy concerns related to human self-disclosure across different digital platforms (such as GitHub and Amazon). It then introduces the design of technologies intended to mitigate users' self-disclosure by detecting sensitive information and sanitizing texts. This proactive approach to privacy protection aims to reduce unintended data leaks while maintaining user agency. Finally, Part III tackles the challenge of assessing the impact of AI technologies. It begins by clarifying the terminology used to describe the augmentation or replacement of human abilities, and then proposes an evaluation framework aimed at fostering more responsible design practices and enabling a more informed adoption of these technologies. The framework is applied to Alexa as a case study to explore how AI systems can augment or replace human abilities. This framework is intended not only for policymakers but also for designers and engineers, who play a central role in shaping AI’s trajectory. In this thesis, empowerment is conceptualized in a broader sense. On one hand, it involves augmenting human abilities by equipping individuals with software tools that act as protective "shields", mediating their interactions with artificial agents. On the other hand, it emphasizes ensuring transparency in the impact of these technologies while fostering the development and adoption of more ethical AI practices. Ultimately, this work aspires to contribute to a future where AI supports—not undermines—human dignity, autonomy, and societal well-being.
| File | Size | Format | |
|---|---|---|---|
| PhDAI_adapted_8_1.pdf (open access; license: all rights reserved) | 7.97 MB | Adobe PDF | View/Open |
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/300980
URN:NBN:IT:UNIPI-300980