
From Online Optimization to Neural State Space Models: A Control-Inspired Approach to Learning

CASTI, UMBERTO
2026

Abstract

This thesis investigates online optimization and its connections to modern neural architectures through a control-theoretic approach. In classical optimization, the goal is to minimize a fixed cost function; in the online setting, the problem must instead be solved while the objective and constraints evolve over time. This turns optimization into a dynamic problem, in which tools and principles from control theory become relevant to ensure convergence of the optimization error. Rather than directly applying standard control techniques, this work proposes new algorithms that reinterpret control-theoretic ideas within the optimization framework and its associated challenges. One of the most compelling areas of application is modern deep learning, where recent architectures such as Transformers and State Space Models exhibit in-context learning abilities. These mechanisms can be formalized as online regression problems, offering a principled way to analyze and improve such models. In parallel, the second part of the thesis develops and studies brain-inspired neural architectures. Starting from neuroscience-motivated principles, we investigate properties of both dynamical models of brain activity and artificial brain-inspired architectures, and we leverage control-theoretic tools for their analysis and synthesis. Overall, the thesis advances a framework for online optimization and, in parallel, offers a complementary investigation of modern neural architectures.
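The abstract's notion of online optimization — a cost whose minimizer drifts over time, with the algorithm taking one update per time instant and the tracking error staying bounded — can be illustrated with a minimal sketch. This is not code from the thesis; the quadratic cost, drift signal, and step size below are illustrative assumptions.

```python
import math

# Illustrative sketch (not from the thesis): online gradient descent on a
# time-varying quadratic f_t(x) = 0.5 * (x - c_t)^2, whose minimizer c_t
# drifts slowly over time.
def target(t):
    return math.sin(0.05 * t)  # slowly moving optimum c_t (assumed drift)

x = 0.0      # current iterate
step = 0.5   # gradient step size
errors = []
for t in range(200):
    x -= step * (x - target(t))        # one gradient step per time instant
    errors.append(abs(x - target(t)))  # tracking error |x_t - c_t|

# The iterate tracks the moving minimizer up to a small steady-state error
# determined by the step size and the speed of the drift.
```

With this step size and drift rate, the tracking error stays roughly an order of magnitude below the amplitude of the drift, which is the qualitative behavior the abstract refers to when it speaks of "convergence of the optimization error" in the dynamic setting.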
Defense date: 19 Feb 2026
Language: English
Supervisor: ZAMPIERI, SANDRO
Università degli studi di Padova
Files in this record:
tesi_definitiva_Umberto_Casti.pdf
Access: open access
License: All rights reserved
Size: 4.48 MB
Format: Adobe PDF

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/361054
The NBN code of this thesis is URN:NBN:IT:UNIPD-361054