Exploiting Interpretability in AI-Driven Medical Imaging: Strategies for Transparency and Clinical Integration in Computational Pathology
Pozzi, Matteo
2025
Abstract
The integration of artificial intelligence (AI) into medical imaging has the potential to revolutionize diagnostic processes and patient care. However, the "black-box" nature of some deep learning models poses significant challenges for clinical adoption, as medical professionals require transparency and interpretability to trust and effectively use these tools. This thesis aims to enhance the interpretability and explainability of deep learning models in medical imaging by incorporating both post-hoc explainability techniques and interpretable-by-design methods, with a particular focus on computational pathology.

The research begins with a comprehensive review of automation in pathology, highlighting how automated systems can improve the robustness and consistency of pathology practices. This review underscores the importance of automation as a foundation for integrating AI applications into clinical workflows, enabling the handling of large data volumes with high precision and laying the groundwork for reliable AI model development.

Addressing the challenge of data scarcity, a significant barrier to training effective AI models, the thesis then investigates synthetic data generation in computational pathology. Synthetic histopathological images are created not only to augment existing datasets but also to evaluate their potential as substitutes for original datasets, thereby easing data-sharing constraints. Explainable AI (XAI) techniques are applied to assess the usability of these synthetic images for training AI models. The findings reveal that, when properly validated, synthetic data can mitigate limitations due to data scarcity and enhance model performance in detecting and classifying pathological features.

The research further addresses the quantification of tumor-infiltrating lymphocytes (TILs) through an interpretable-by-design approach. Rather than developing new segmentation models, the study uses existing segmentation outputs to calculate TIL scores in accordance with international guidelines. This design ensures that the segmentation masks used are accurate and interpretable, aligning AI-generated outputs with standardized clinical scoring systems. The approach enhances the reliability and clinical relevance of the findings, providing insights into the tumor microenvironment that are crucial for prognosis and personalized treatment strategies.

Collectively, the studies presented in this thesis contribute to a coherent strategy for advancing computational pathology through the integration of interpretability and explainability into deep learning models. By systematically addressing key challenges, namely automation, explainability, data scarcity, and standardization, the research works toward the overarching goal of enhancing diagnostic accuracy and efficiency in medical imaging.

In conclusion, the thesis demonstrates that integrating interpretability and explainability techniques into deep learning models is both feasible and beneficial for clinical practice. The use of post-hoc explainability methods and interpretable-by-design approaches not only improves model transparency but also fosters trust among medical professionals. This work lays a foundation for future AI applications in medicine that prioritize transparency, reliability, and clinical alignment, ultimately contributing to improved patient outcomes and the advancement of personalized healthcare.
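As an illustration of the kind of guideline-aligned TIL scoring described in the abstract, the minimal sketch below computes a stromal TIL percentage from precomputed segmentation outputs. It is not the thesis's actual pipeline: the class label `TUMOR_STROMA`, the mask names, and the pixel-area definition of the score (lymphocyte area within tumor-associated stroma divided by stromal area) are assumptions based on commonly used TIL-scoring conventions.

```python
import numpy as np

# Hypothetical label convention for the tissue segmentation mask;
# the actual class encoding depends on the segmentation model used.
TUMOR_STROMA = 2

def stromal_til_score(tissue_mask: np.ndarray, lymphocyte_mask: np.ndarray) -> float:
    """Compute a stromal TIL score (0-100) from precomputed segmentation outputs.

    Follows a common guideline-style definition: the percentage of the
    tumor-associated stromal area that is occupied by lymphocytes.

    tissue_mask      -- 2D array of tissue class labels per pixel
    lymphocyte_mask  -- 2D boolean array, True where lymphocytes were detected
    """
    stroma = tissue_mask == TUMOR_STROMA
    stromal_area = stroma.sum()
    if stromal_area == 0:
        return 0.0  # no tumor-associated stroma segmented in this region
    # Count lymphocyte pixels only where they fall inside tumor-associated stroma.
    tils_in_stroma = np.logical_and(stroma, lymphocyte_mask).sum()
    return 100.0 * tils_in_stroma / stromal_area

# Toy example: a 4x4 region that is half stroma, with lymphocytes
# covering a quarter of that stromal area.
tissue = np.array([[2, 2, 1, 1],
                   [2, 2, 1, 1],
                   [2, 2, 1, 1],
                   [2, 2, 1, 1]])
lymph = np.zeros_like(tissue, dtype=bool)
lymph[0, 0] = lymph[1, 0] = True
print(stromal_til_score(tissue, lymph))  # 25.0
```

Because the score is a direct area ratio over human-checkable segmentation masks, each reported value can be traced back to the regions that produced it, which is the sense in which the approach is interpretable by design.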
File | Size | Format
---|---|---
PhD_thesis_Pozzi_Matteo-10-03-25.pdf (open access) | 46.92 MB | Adobe PDF
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/208965
URN:NBN:IT:UNITN-208965