Towards Trustworthy AI: Understanding the Impact of AI Threats and Countermeasures

ROSSOLINI, GIULIO
2024

Abstract

The rapid advancement of AI, particularly of deep neural networks (DNNs), has confronted the research community with complex safety and security challenges that must be carefully addressed to ensure the correct integration of AI algorithms into human-centric systems. AI threats range from intentionally crafted samples, such as adversarial perturbations and real-world adversarial objects, to unexpected out-of-distribution samples. These threats raise numerous questions about the security vulnerabilities and safety requirements of the models and applications under analysis. It is therefore crucial to understand and design testing methodologies and mitigation strategies that account for the specific aspects and requirements of each application scenario. This thesis investigates AI threats and countermeasures, with a specific focus on computer vision applications in safety-critical environments such as cyber-physical systems, autonomous robots, and self-driving cars. Within the context of trustworthy AI, the main research areas explored are DNN testing and the design of novel real-world attacks and defenses in complex outdoor scenarios.

First, the thesis critically examines the landscape of DNN testing, focusing on coverage criteria, a concept adapted from software engineering. In this context, it introduces a framework that uses coverage criteria to monitor the behavior of neural networks at run-time, offering a novel methodological perspective on leveraging testing techniques to assess model behavior. Through an analysis of state-of-the-art approaches in coverage testing and the results obtained, the thesis devotes significant effort to outlining future research directions.

Then, in the realm of real-world adversarial attacks, the thesis reviews the literature on attack and defense strategies, highlighting gaps in the analysis of certain computer vision tasks and applications. The review also underscores insufficient awareness of the practical implications of state-of-the-art defense mechanisms when applied in safety-critical scenarios. Following these observations, the work develops real-world attacks against semantic segmentation tasks, providing a clear interpretation of the spatial robustness of DNNs. The proposed attack methodology builds on novel optimization approaches and the use of driving simulators. Subsequently, the thesis presents an in-depth study of novel interpretable defense mechanisms, founded on provable and robust analysis.

In conclusion, this thesis examines AI threats from different perspectives, merging theoretical discussion with practical application. It aims to extend and review the existing literature, stimulate further research, and enhance the understanding of AI safety and security threats.
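As a rough illustration of the coverage-criteria idea mentioned in the abstract, the sketch below computes a simple neuron activation coverage score over a batch of inputs, in the spirit of DeepXplore-style criteria from the DNN testing literature. The toy model, threshold, and all names are illustrative assumptions, not the framework proposed in the thesis.

```python
# Minimal sketch of neuron activation coverage. All names and the
# threshold are illustrative, not taken from the thesis.
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer MLP with random weights stands in for a trained DNN.
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def forward(x):
    """Return the hidden activations alongside the output logits."""
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU hidden layer
    return h, W2 @ h + b2

def neuron_coverage(inputs, threshold=0.5):
    """Fraction of hidden neurons activated above `threshold`
    by at least one input in the batch."""
    covered = np.zeros(W1.shape[0], dtype=bool)
    for x in inputs:
        h, _ = forward(x)
        covered |= h > threshold
    return covered.mean()

batch = rng.normal(size=(32, 4))
print(f"neuron coverage: {neuron_coverage(batch):.2f}")
```

One plausible way such bookkeeping supports run-time monitoring, as the abstract suggests, is to maintain the covered set online and flag inputs that activate neurons never covered during testing as potentially anomalous.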
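The thesis targets physical, real-world attacks on semantic segmentation; as a simplified digital stand-in for that class of attacks, the following PGD-style sketch perturbs an image to degrade per-pixel predictions. The model, budget, and hyper-parameters are placeholders, not the optimization scheme developed in the thesis.

```python
# Hedged sketch of a PGD-style attack on a segmentation model.
# Everything here (model, budget, steps) is a placeholder.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy "segmentation network": an untrained 1x1 convolution mapping
# 3 channels to 5 per-pixel class scores stands in for a real model.
model = torch.nn.Conv2d(3, 5, kernel_size=1)

image = torch.rand(1, 3, 32, 32)            # input in [0, 1]
labels = torch.randint(0, 5, (1, 32, 32))   # per-pixel ground truth

eps, alpha, steps = 8 / 255, 2 / 255, 10
delta = torch.zeros_like(image, requires_grad=True)

for _ in range(steps):
    logits = model(image + delta)
    # Maximize per-pixel cross-entropy to degrade the segmentation.
    loss = F.cross_entropy(logits, labels)
    loss.backward()
    with torch.no_grad():
        delta += alpha * delta.grad.sign()
        delta.clamp_(-eps, eps)                           # L-inf budget
        delta.copy_((image + delta).clamp(0, 1) - image)  # valid pixels
    delta.grad.zero_()

with torch.no_grad():
    clean_acc = (model(image).argmax(1) == labels).float().mean()
    adv_acc = (model(image + delta).argmax(1) == labels).float().mean()
print(f"pixel accuracy: clean {clean_acc:.2f} -> adversarial {adv_acc:.2f}")
```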
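Finally, provable robustness analyses of the kind the defense study builds on typically propagate input bounds through the network; the sketch below shows textbook interval-bound propagation through a single ReLU layer. It is a generic illustration under that assumption, not the interpretable defense proposed in the thesis.

```python
# Interval-bound propagation (IBP) through x -> relu(Wx + b):
# a common building block of certified robustness analyses.
import numpy as np

rng = np.random.default_rng(1)
W, b = rng.normal(size=(8, 4)), rng.normal(size=8)

def ibp_relu_layer(lower, upper):
    """Propagate elementwise input bounds [lower, upper] through the layer."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    lo = W_pos @ lower + W_neg @ upper + b   # smallest pre-activation
    hi = W_pos @ upper + W_neg @ lower + b   # largest pre-activation
    # ReLU is monotone, so it can be applied directly to the bounds.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

x, eps = rng.normal(size=4), 0.1
lo, hi = ibp_relu_layer(x - eps, x + eps)
print("certified hidden-unit ranges:", np.round(lo, 2), np.round(hi, 2))
```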
19 March 2024
Italian
DNN Testing
Adversarial Attacks
Adversarial Defenses
Real-World Robustness
BUTTAZZO, GIORGIO CARLO
BIONDI, ALESSANDRO
SUN, YOUCHENG
ROLI, FABIO
CUCINOTTA, TOMMASO
COCOCCIONI, MARCO
PINTOR, MAURA
Files in this item:

File: rossolini_v3.pdf (under embargo until 15/03/2027)
Size: 5.23 MB
Format: Adobe PDF
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/217464
The NBN code of this thesis is URN:NBN:IT:SSSUP-217464