Robust Adaptive Learning in Non-Stationary Environments: from Time Series to Continual Learning

URETTINI, EDOARDO
2025

Abstract

This thesis explores deep learning in non-stationary environments; specifically, it investigates how to make neural networks and their training more robust and adaptive to changes over time. Aiming for a more realistic setting and taking inspiration from how humans learn, this work focuses on online adaptation, proposing new methods that improve over the previous state of the art. To do so, we work at the intersection of time series analysis and online continual learning, drawing tools and ideas from both fields. First, we propose GAS-Norm, a new method that uses a robust version of score-driven models to adaptively normalize the input of a neural network, making the network more adaptable after training. We propose this method for time series forecasting tasks, where large shifts in the mean and variance of the input are common. Second, we focus on the optimization algorithm itself, aiming to improve online continual learning for image classification. In such settings, new classes and domains can disrupt previously acquired knowledge, creating instabilities. To address this problem, we propose augmenting replay with second-order information and an explicit stability constraint. The resulting method, Online Curvature-Aware Replay (OCAR), uses the Fisher information matrix to approximate curvature and shows strong performance in both class-incremental and domain-incremental settings. Finally, we propose an extension of OCAR for online time series forecasting. We draw a connection between score-driven models and natural gradient descent, and we show that combining a Student's t distributional assumption with natural gradient descent yields a bounded and robust update. Compared with other online continual learning approaches for time series forecasting, our proposal shows improvements on multiple datasets. All methods are theoretically justified and empirically validated, with public code.
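As an illustration of the adaptive normalization idea summarized above, the sketch below implements a score-driven (GAS-style) filter for a time-varying mean and variance under a Gaussian observation model and uses it to standardize each incoming observation. It is only a minimal sketch: the Gaussian form, the function name gas_normalize, and the step sizes alpha_mu and alpha_var are illustrative assumptions; GAS-Norm as proposed in the thesis relies on a robust variant of the score-driven update.

```python
import numpy as np

def gas_normalize(series, alpha_mu=0.05, alpha_var=0.05, eps=1e-6):
    """Illustrative score-driven (GAS-style) adaptive normalization.

    Under a Gaussian observation model, the scaled score updates reduce to
    exponentially weighted estimates of the mean and variance. GAS-Norm as
    described in the thesis uses a robust variant; the Gaussian form and the
    step sizes used here are simplifying assumptions.
    """
    series = np.asarray(series, dtype=float)
    mu = series[0]
    var = float(np.var(series)) + eps
    normalized = np.empty_like(series)
    for t, y in enumerate(series):
        # Standardize the current observation with the filtered statistics.
        normalized[t] = (y - mu) / np.sqrt(var + eps)
        # Update mean and variance with the scaled Gaussian score
        # (equivalently, exponentially weighted moving estimates).
        mu = mu + alpha_mu * (y - mu)
        var = var + alpha_var * ((y - mu) ** 2 - var)
    return normalized

# Example: a series with an abrupt shift in level and scale halfway through.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(5.0, 2.0, 200)])
    z = gas_normalize(y)
    print(z.mean(), z.std())
```

In the robust variant referred to in the abstract, the Gaussian score would be replaced by the score of a Student's t likelihood, whose influence on the update is bounded, so isolated outliers cannot move the filtered mean and variance arbitrarily far.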
3 December 2025
English
Continual Learning
Time Series Forecasting
Deep Learning
Carta, Antonio
Bacciu, Davide
Lomonaco, Vincenzo
Files for this item:
Thesis_Urettini_final.pdf (Adobe PDF, 3.98 MB), open access, Creative Commons license

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/354154
The NBN code of this thesis is URN:NBN:IT:UNIPI-354154