
Deep learning for assisted and interpretable recognition of neurodevelopmental disorders

COLONNESE, FEDERICA
2026

Abstract

Neurodevelopmental disorders (NDDs) are complex and heterogeneous conditions whose diagnosis still relies largely on subjective behavioral evaluation, leading to misdiagnosis and delays. The goal of this PhD thesis is to explore how Artificial Intelligence (AI), and in particular Deep Learning (DL) combined with explainability techniques, can contribute to a more objective, transparent, and early identification of these disorders through the analysis of behavioral and neurophysiological data. The research focuses on Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD) and adopts a multimodal perspective, integrating eye gaze, gait analysis, and electroencephalographic (EEG) signals, modalities that reflect clinically validated biomarkers of neurodevelopmental functioning. Each domain is investigated through a dedicated methodological framework that couples high-performance architectures with model interpretability to achieve both predictive accuracy and clinical trustworthiness. For gaze-based ASD detection, bimodal DL architectures and Graph Attention Networks fuse stimulus features with scanpaths, achieving state-of-the-art accuracy while providing interpretable visual attributions aligned with established clinical behavioral evidence. For gait, convolutional and graph architectures over 3D joint trajectories capture spatio-temporal dependencies and reveal distinctive inter-joint coordination patterns consistent with clinical reports. For EEG, a hyperdimensional-computing pipeline enables data-efficient ADHD classification on limited cohorts, with prototype-level interpretability and robustness to noise. Finally, an EEG-conditioned diffusion framework reconstructs visual representations coherent with the eliciting stimuli and outperforms adversarial baselines in perceptual quality and semantic agreement.
Across all studies, explainability techniques were systematically employed to understand model reasoning, confirm the relevance of the extracted features, and enhance clinical interpretability. The results suggest that objective, multimodal, and interpretable AI is feasible for NDD assessment, while external validation, model calibration, and prospective evaluation remain prerequisites for clinical integration. This work contributes both methodologically and ethically towards a more transparent, explainable, and equitable integration of AI into neurodevelopmental research and healthcare.
20 January 2026
Language: English
Supervisor: PANELLA, Massimo
Università degli Studi di Roma "La Sapienza"
Files in this item:
File: Tesi_dottorato_Colonnese.pdf
Access: open access
License: Creative Commons
Size: 38.54 MB
Format: Adobe PDF

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/361062
The NBN code of this thesis is URN:NBN:IT:UNIROMA1-361062