Bridging Information Access and Visual Analytics Methods for Supporting the Decision Process in the Digital Pathology Domain

GIACHELLE, FABIO
2023

Abstract

In recent years, deep-learning approaches for digital pathology have proven effective in image analysis tasks such as classification. Despite these promising results, the adoption of such approaches in clinical practice is still limited, due to two major issues. First, there is a lack of the annotated datasets required to train and evaluate deep-learning algorithms: annotating large datasets is expensive and hard to achieve, especially given the scarcity of expert pathologists willing to undertake such a time-consuming task. Second, the outcomes of deep-learning approaches are difficult to comprehend and assess, due to the black-box nature of the models involved. In the digital pathology domain, however, pathologists need to understand why a specific outcome has been determined in order to trust model predictions. Moreover, explainable artificial intelligence is not only desirable but also a mandatory requirement under recent regulations such as the European General Data Protection Regulation (GDPR). According to recent studies, pathologists prefer visual explanations of algorithms' outcomes that clearly indicate the scientific claims supporting each prediction. Information visualization and visual analytics methods can allow pathologists to visually comprehend machine predictions by means of intuitive explanation interfaces. Although other domains, such as radiology, have already benefited from these techniques, their adoption in digital pathology is still limited. In this thesis, we tackle these issues by synergistically combining computational, analytics, and visual approaches to support diagnostics and the decision-making process in digital pathology. First, we propose the Semantic Knowledge Extractor Tool (SKET) for knowledge extraction from free-text pathology reports.
SKET automatically generates weak annotations that are used to train a deep-learning-based image classification system for digital pathology. Second, we propose SKET eXplained (SKET X), an explainability tool that exploits visual analytics techniques to visually explain the outputs of SKET. Third, we introduce MedTAG, a customizable annotation tool for clinical reports that facilitates the creation of consistent, permanent ground-truth labels and speeds up the burdensome annotation task; to this end, MedTAG integrates SKET to provide automatic annotation facilities. We also propose NanoWeb for exploring the knowledge generated by the interconnected network of scientific facts extracted from the literature and encoded as machine-readable statements within the Linked Data paradigm. Finally, we integrate our contributions into the ExaSURE (System for Unified Resource Exploration) ecosystem for unified, web-based access. The ExaSURE ecosystem represents a step forward in integrating algorithmic and visual digital tools into clinical practice to support pathologists' work in their daily routines.
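The abstract describes SKET deriving weak annotations from free-text pathology reports to supply training labels for an image classifier. As a toy illustration of the general weak-labeling idea only (not SKET's actual pipeline, which relies on NLP and ontology-based concept matching; the vocabulary below is hypothetical), a minimal keyword-based labeler might look like:

```python
# Illustrative sketch of weak annotation from free-text pathology reports.
# The concept-to-label vocabulary here is a made-up example; a real system
# would link report text to ontology concepts rather than match substrings.

CONCEPTS = {
    "adenocarcinoma": "cancer",
    "high-grade dysplasia": "dysplasia",
    "hyperplastic polyp": "hyperplastic polyp",
}

def weak_labels(report: str) -> set:
    """Return the set of weak class labels whose terms appear in a report."""
    text = report.lower()
    return {label for term, label in CONCEPTS.items() if term in text}

labels = weak_labels("Colon biopsy: fragments consistent with adenocarcinoma.")
```

Labels obtained this way are "weak" because they are inferred from the report text rather than assigned by an expert per image, so they may be noisy or incomplete; the thesis uses them precisely to reduce the need for costly manual annotation.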
Defense date: 20 March 2023
Language: English
Supervisor: SILVELLO, GIANMARIA
Università degli studi di Padova
Files in this record:
tesi_definitiva_Fabio_Giachelle.pdf (open access, 25.91 MB, Adobe PDF)

Documents in UNITESI are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/98384
The NBN code of this thesis is URN:NBN:IT:UNIPD-98384