
Unmasking Deepfakes, Adversarial Attacks, and Mobile Users in The Deep Learning Era

PERELLI, GIANPAOLO
2026

Abstract

The integration of artificial intelligence systems into society continues to grow, driven by their ability to deliver solutions rapidly and often more efficiently than humans. Since these technologies process personal data, they must be robust and secure. This thesis examines these crucial requirements, investigating the vulnerabilities that such systems may face in real-world scenarios. One threat that has attracted significant research attention in recent years is the phenomenon of deepfakes. Their widespread dissemination, ease of generation, and high quality make them among the most dangerous threats: discrediting a person, spreading false news, and compromising digital identities have become alarmingly easy. Although studies demonstrate detection solutions with satisfactory results, their weakness under compression cannot be overlooked. Files shared via apps or social media are compressed and resized to speed up delivery, which inadvertently acts as an attack on deepfake detectors. Beyond such accidental degradation from data sharing, attacks crafted deliberately must also be considered. In voice disorder detection, a task of growing importance in public health, machine learning now plays a crucial role in distinguishing healthy from pathological voices. A significant concern, given the sensitivity of personal health information, is that these systems can be deceived by various types of attacks, such as adversarial, evasion, and pitch-based methods. Attacks, however, are not limited to flipping a classification; they can also be mounted to extract confidential information. In this respect, the rapid spread of smartphones, which collect extensive personal and behavioral data, poses new risks to digital identity: applications that use mobile sensor data for activity, health, or security monitoring may inadvertently enable the identification of individuals or devices.
This thesis tackles these issues by (1) presenting innovative methods to mitigate performance losses in deepfake detectors under compression, (2) providing an in-depth analysis of vulnerabilities in voice disorder detection systems to enhance defenses against attacks, and (3) highlighting potential privacy concerns in human activity recognition systems.
27 Feb 2026
English
PUGLISI, GIOVANNI
MARCIALIS, GIAN LUCA
Università degli Studi di Cagliari
Files in this item:
Tesi_di_dottorato_Gianpaolo Perelli.pdf
Open access
License: All rights reserved
Size: 41.55 MB
Format: Adobe PDF

Documents in UNITESI are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/359478
The NBN code of this thesis is URN:NBN:IT:UNICA-359478