
Implementation and optimization of explainable and trustworthy Artificial Intelligence algorithms for the analysis of radiological images

SCAPICCHIO, CAMILLA
2024

Abstract

This Ph.D. research focuses on the development of Artificial Intelligence (AI) methods, based on both Machine Learning (ML) and Deep Learning (DL) algorithms, to automatically analyze radiological images and build predictive models for diagnosis and prognosis. The research problem this thesis addresses is the lack of transparency and interpretability that still prevents the real adoption of AI systems in clinical practice. An investigation of possible solutions to the limitations that currently characterize medical imaging applications of AI is presented. The main goal of the thesis is not only to develop high-performance models but also to propose examples of methods that make AI systems more explainable and trustworthy. The applications described in this thesis focus on the COVID-19 use case. It represented an excellent use case for addressing the issues mentioned above, given its social and clinical relevance and, consequently, the availability of multi-centric public data. However, the described methods may be refined for application to other diseases. Three main applications are presented, following the common thread of explainability of the developed analyses, addressed in its different aspects and forms. The main applications are: 1) The development of a fully automated pipeline based on three deep neural networks (DNNs), with a final multi-input model trained on both CXR images and clinical variables, to predict the severity outcome of COVID-19, distinguishing severe cases from non-severe ones. I focused on explaining the prediction through a visualization technique. 2) The optimization and multi-center validation of an AI quantification system developed to characterize COVID-19 lesions quantitatively. The algorithm was designed to generate quantitative indexes representing qualitative features that clinicians can immediately understand.
3) The development of an ML pipeline based on radiomics features extracted from CT scan volumes to predict the clinical prognosis of COVID-19 patients. In this study, different aspects of robustness and harmonization for bias reduction were addressed. An a-posteriori study on phantom images was also conducted to characterize CT scanners in terms of image quality and to find a method to harmonize data from different centers, improving feature robustness.
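The multi-input fusion step of application 1 can be sketched in a minimal form: image-derived features and tabular clinical variables are concatenated into a single vector that feeds a severity output. The sketch below uses synthetic data and hypothetical feature dimensions; it is only an illustration of the fusion idea, not the thesis pipeline, which relies on three trained DNNs.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-ins: pooled embeddings as a CNN might produce from a
# CXR image, plus standardized tabular clinical variables (age, O2 sat, ...).
n = 200
img_feat = rng.normal(size=(n, 16))   # image-branch features (assumed size)
clinical = rng.normal(size=(n, 4))    # clinical-variable features (assumed size)

# Multi-input fusion: concatenate the two modalities into one vector,
# then apply a single logistic output unit as a minimal severity head.
x = np.concatenate([img_feat, clinical], axis=1)
w = rng.normal(scale=0.1, size=x.shape[1])
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

p_severe = sigmoid(x @ w + b)         # probability of the severe outcome
print(p_severe.shape)                 # (200,)
```

In the actual pipeline the fused representation would be learned end to end; the logistic unit here only makes the concatenation-then-classify structure explicit.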
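For application 3, the idea of harmonizing radiomics features across centers can be illustrated with a simple location-scale alignment of each feature to a reference center's distribution. This is a deliberately simplified stand-in for full harmonization procedures such as ComBat, on toy data; the feature values, center names, and shift/scale values are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy radiomics-like feature matrices from two "centers" whose scanners
# introduce a different offset and scale (hypothetical acquisition effect).
n, p = 100, 5
center_a = rng.normal(0.0, 1.0, (n, p))   # reference center
center_b = rng.normal(0.5, 1.5, (n, p))   # shifted and rescaled center

def align_to_reference(x, ref):
    """Location-scale alignment of each feature column to a reference
    distribution (a simplified sketch of ComBat-style harmonization)."""
    mu_x, sd_x = x.mean(axis=0), x.std(axis=0)
    mu_r, sd_r = ref.mean(axis=0), ref.std(axis=0)
    return (x - mu_x) / sd_x * sd_r + mu_r

center_b_h = align_to_reference(center_b, center_a)

# After alignment, per-feature means and stds of center B match center A.
print(np.allclose(center_b_h.mean(axis=0), center_a.mean(axis=0)))  # True
```

Full ComBat additionally pools information across features with an empirical-Bayes step and can preserve known biological covariates; the sketch above only captures the per-feature location-scale correction.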
30 Jan 2024
Italian
artificial intelligence
covid-19
explainability
medical imaging
trustworthiness
Fantacci, Maria Evelina
Neri, Emanuele
Files in this item:

Report_attivita_phd_CamillaScapicchio.pdf, 77.83 kB, Adobe PDF (not available)

Tesi_Dottorato_CamillaScapicchio.pdf, 38.53 MB, Adobe PDF (open access)

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/216617
The NBN code of this thesis is URN:NBN:IT:UNIPI-216617