
Interpretable Time Series Classification via Temporal Logic Embeddings

FERFOGLIA, IRENE
2026

Abstract

This thesis addresses the problem of interpretability in time series classification, a domain where state-of-the-art deep learning models achieve high predictive performance at the cost of transparency. In safety-critical and regulated application areas, such as healthcare, industrial monitoring, and cyber-physical systems, opaque decision-making processes hinder trust, accountability, and practical deployment. Existing explainability approaches for time series are predominantly post-hoc, often unstable and weakly grounded semantically, and do not guarantee faithfulness to the model’s actual reasoning. The primary objective of this thesis is to develop a time series classification framework in which interpretability is embedded directly into the learning process. Rather than explaining predictions after training, the proposed approach aims to produce predictions that are inherently expressed in terms of human-understandable temporal concepts. To achieve this, the thesis investigates the integration of Signal Temporal Logic (STL) into modern learning architectures. STL provides a formally grounded and expressive language for describing temporal properties such as sustained behaviours over time and bounded temporal constraints, which are central to many real-world time series phenomena. The main contribution of the thesis is STELLE (Signal Temporal Logic Embedding for Logically-grounded Learning and Explanation), a neuro-symbolic architecture for interpretable time series classification. STELLE maps raw time series trajectories into a concept space defined by STL formulae using a novel trajectory embedding kernel based on STL robustness. This kernel establishes a quantitative and differentiable link between continuous signals and symbolic temporal concepts, enabling joint optimisation of classification performance and interpretability. STELLE is interpretable by design and produces explanations directly from its internal structure. 
The framework supports both local explanations, which characterise individual predictions, and global explanations, which capture class-level temporal patterns, without relying on additional post-hoc methods. The resulting explanations are concise, semantically meaningful, and faithful to the classifier’s decision process. An extensive experimental evaluation on univariate and multivariate benchmarks demonstrates that STELLE achieves competitive accuracy with respect to state-of-the-art time series classification methods, while providing logically grounded explanations. Through ablation and sensitivity studies, the thesis analyses the impact of key architectural and design choices, illustrating the trade-offs between expressiveness, interpretability, and performance. Overall, this work contributes a principled neuro-symbolic framework that advances interpretable and trustworthy learning for time series data.
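To make the abstract's notion of "STL robustness" concrete: the quantitative semantics of STL assigns each formula a real-valued margin rather than a Boolean verdict, and smooth surrogates of the min/max operators make it differentiable. The sketch below is only an illustration of this standard semantics, not the thesis's actual embedding kernel; the function names, the example threshold, and the `temp` sharpness parameter are all hypothetical choices.

```python
import numpy as np

def rob_atom(x, c):
    # Robustness of the atomic predicate (x > c): a positive margin
    # means the predicate holds, a negative one means it is violated.
    return x - c

def rob_globally(r, a, b):
    # Robustness of G_[a,b] phi at time 0: the worst-case (minimum)
    # robustness of phi over the discrete window [a, b].
    return np.min(r[a:b + 1])

def soft_min(r, temp=10.0):
    # Differentiable surrogate of min via -logsumexp(-temp * r) / temp,
    # a lower bound that tightens as temp grows; this is what enables
    # gradient-based training through temporal operators.
    return -np.log(np.sum(np.exp(-temp * r))) / temp

x = np.array([1.0, 1.2, 0.9, 1.1, 1.3])   # a toy univariate trajectory
r = rob_atom(x, 0.8)                       # pointwise margins above 0.8
hard = rob_globally(r, 0, 4)               # exact robustness of G_[0,4](x > 0.8)
soft = soft_min(r)                         # smooth approximation used in learning
```

Here `hard` equals the smallest margin in the window, while `soft` under-approximates it smoothly, illustrating the "quantitative and differentiable link between continuous signals and symbolic temporal concepts" mentioned above.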
25 February 2026
English
Time Series; Explainability; Deep Learning; Neuro-symbolic; Temporal Logic
NENZI, LAURA
BORTOLUSSI, LUCA
Università degli Studi di Trieste
Files in this item:

FERFOGLIA_PhD_Thesis.pdf
Open access
Licence: All rights reserved
Size: 17.55 MB
Format: Adobe PDF

FERFOGLIA_PhD_Thesis_1.pdf
Open access
Licence: All rights reserved
Size: 17.55 MB
Format: Adobe PDF

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/362748
The NBN code of this thesis is URN:NBN:IT:UNITS-362748