
Deep learning across domains: from forecasting environmental signals and classifying medical data to explainable AI systems

MANGANELLI CONFORTI, PIETRO
2026

Abstract

The rapid development of Artificial Intelligence and Deep Learning techniques has profoundly transformed a wide range of application domains, from medical diagnostics to environmental monitoring. However, several challenges remain in adapting advanced models to domains where data are scarce, heterogeneous, or require strong interpretability. This dissertation presents a series of research contributions addressing these issues across three main axes.

First, in the medical domain, I investigated the intersection of Raman spectroscopy and AI for cancer diagnostics, providing a comprehensive survey and critical analysis of how machine learning can support cancer grading. This work highlights both the potential and the limitations of applying AI methods in sensitive applications, where ethical considerations are essential.

Second, I explored time-series forecasting in environmental contexts, with a focus on air quality and sea-temperature signals. Leveraging deep learning and signal-transformation techniques, I developed methods that improve predictive accuracy and robustness in environmental monitoring tasks, contributing to the development of reliable tools for sustainability and public health.

Finally, my research turned towards explainable artificial intelligence, evaluating the role of explanations as both a tool for knowledge distillation and a possible vulnerability. I investigated how explainability can support model compression while also examining the risks of explanation manipulation, thus contributing to the growing discussion on the trustworthiness and reliability of AI systems.

Together, these contributions illustrate multiple trajectories from domain-specific applications toward methodological advances in generalizability and performance.
Beyond the immediate findings, this dissertation underscores the adaptability and breadth of deep learning approaches across diverse domains, thereby contributing to the continued advancement of artificial intelligence.
29 Jan 2026
English
Italian
Grigioni, Mauro
SCHAERF, Marco
RUSSO, PAOLO
Università degli Studi di Roma "La Sapienza"
Files in this item:
File: Tesi_dottorato_ManganelliConforti.pdf
Access: open access
License: Creative Commons
Size: 21.29 MB
Format: Adobe PDF

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/358531
The NBN code of this thesis is URN:NBN:IT:UNIROMA1-358531