
Trustworthy AI in Practice: Modeling, Trade-offs, and Applications

ALABBASI, WESAM NITHAM
2025

Abstract

Humans are becoming increasingly reliant on the assistance of intelligent systems. These systems are deployed to automatically analyze and correlate vast amounts of available data, including highly sensitive data, producing accurate results that drive even critical or strategic decisions. Nevertheless, such analysis systems raise trust-related concerns, such as respecting users’ privacy and ensuring transparency in data processing and decision-making. To build trust in AI systems, we endeavor to develop a methodology for better modeling and respecting ethical, social, and legal aspects from both the AI system and the human perspectives. A deeper understanding of how AI systems affect trust, of the guidelines and regulations for trustworthy AI, and of the requirements of trustworthy AI together with the mechanisms recently proposed to implement them, will help build a systematic approach to enhancing the trustworthiness of AI systems. Recently, numerous initiatives have been undertaken by organizations and governmental entities, such as the European Commission and the Organisation for Economic Co-operation and Development (OECD), to analyze the impact of AI systems and to provide design principles and guidelines for modeling trustworthy AI systems. While the reasons for prioritizing trustworthy AI requirements such as data privacy and model robustness may vary, the underlying principles reveal a common challenge: these requirements may conflict with one another; for example, enhancing privacy may reduce the model’s robustness. Still, these elements are of crucial importance, as they form the basis of the Trustworthy AI paradigm.
To that end, we propose analyzing Trustworthy AI principles and requirements, investigating the applicability of trust-related mechanisms within the phases of the AI life-cycle, studying the relationships among trustworthy AI requirements, and developing methodologies to trade off these requirements, identifying an optimal trade-off among all requirements and their implementation configurations. This study addresses multiple aspects of the trust-modeling problem in AI systems. First, we consider the Trustworthy AI requirements provided by the "Ethics Guidelines for Trustworthy AI" as key aspects of the analysis. We then apply the related mechanisms across all phases of the AI life-cycle, where we expect conflicts among the mechanisms used to achieve the AI trust principles. Therefore, we propose trade-off criteria over the trust measures of these mechanisms, regulating their input parameters and analyzing the effects to optimize the trade-off.
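The trade-off criterion described above can be sketched in a minimal form: sweep a mechanism's input parameter and pick the configuration that maximizes a combined trust measure. The parameter name (`eps`, a differential-privacy-style budget), the scoring functions, and the weights below are purely illustrative assumptions, not metrics taken from the thesis.

```python
# Illustrative sketch: trading off privacy against robustness by regulating
# a single mechanism parameter eps (hypothetical; smaller eps = more noise).

def privacy_score(eps: float) -> float:
    # Smaller eps -> stronger privacy; mapped into (0, 1], higher is better.
    return 1.0 / (1.0 + eps)

def robustness_score(eps: float) -> float:
    # Heavier noise (small eps) degrades the model, so robustness grows
    # with eps; saturated into [0, 1).
    return eps / (1.0 + eps)

def tradeoff(eps: float, w_privacy: float = 0.5) -> float:
    # Weighted-sum criterion combining the two trust measures.
    return w_privacy * privacy_score(eps) + (1.0 - w_privacy) * robustness_score(eps)

def best_epsilon(candidates, w_privacy: float = 0.5) -> float:
    # Pick the parameter configuration maximizing the combined criterion.
    return max(candidates, key=lambda e: tradeoff(e, w_privacy))

candidates = [0.1, 0.5, 1.0, 2.0, 5.0]
# Privacy-leaning weighting favors small eps; robustness-leaning favors large eps.
eps_star = best_epsilon(candidates, w_privacy=0.7)
```

A weighted sum is only one possible criterion; the same sweep-and-score structure accommodates Pareto-front enumeration or constrained optimization when the requirements cannot be meaningfully collapsed into one scalar.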
15 February 2025
Italian
Data Privacy
Explainable AI
Privacy-preserving Data Analysis
Trade-off
Trustworthy AI
Bacciu, Davide
Saracino, Andrea
Files in this item:
Report_of_activities.pdf — 35.81 kB, Adobe PDF, not available
Trustworthy_AI_in_Practice_Thesis.pdf — 15.4 MB, Adobe PDF, open access

Documents in UNITESI are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/216802
The NBN code of this thesis is URN:NBN:IT:UNIPI-216802