New directions in pre-training for reinforcement learning
ZAMBONI, RICCARDO
2025
Abstract
Reinforcement Learning offers a powerful paradigm for solving sequential decision-making problems, particularly in environments characterized by complex dynamics, partial and noisy observations, or the need for coordinated behavior among multiple agents. Despite remarkable progress, contemporary Reinforcement Learning methods often struggle to acquire general-purpose behaviors that reliably transfer across tasks, especially in non-trivial and realistic settings. In contrast, unsupervised pre-training has become a fundamental driver of generalization in non-sequential domains, as demonstrated by the success of Large Language Models trained on massive unlabeled corpora. Inspired by this paradigm, unsupervised pre-training in Reinforcement Learning has recently emerged as a promising approach to improve generalization across diverse tasks. This framework typically unfolds in two phases: an initial phase in which the agent interacts with the environment without any task-specific supervision, acquiring transferable knowledge, followed by a fine-tuning phase in which that knowledge is adapted to a specific downstream objective. Despite its promise, prior research on unsupervised pre-training in Reinforcement Learning has remained largely confined to simplified settings, often involving a single agent with full access to the environment’s state, or focused narrowly on representation learning under partial observability. This thesis broadens that scope by building on the empirical success of Maximum State Entropy methods in fully observable, single-agent settings and extending their applicability to more challenging and realistic domains, specifically those involving partial observability and multiple agents. Our approach to unsupervised pre-training is therefore centered on the maximization of state entropy, with the goal of inducing policies that generate diverse and informative state distributions even when the true state is hidden or distributed across agents. Through a combination of theoretical analysis and empirical validation, this work generalizes entropy-based objectives to these more complex scenarios, laying the groundwork for a principled and scalable framework for unsupervised pre-training. Ultimately, our goal is to support more general-purpose Reinforcement Learning systems applicable to real-world domains.
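For readers unfamiliar with the objective named in the abstract, the following is a minimal sketch of the Maximum State Entropy formulation as it is commonly stated in the unsupervised Reinforcement Learning literature. The notation is an assumption on our part and is not fixed by the abstract: d^π denotes the state distribution induced by a policy π over an interaction horizon, H denotes Shannon entropy, and S is the state space.

```latex
% Maximum State Entropy pre-training objective (notation assumed, not taken from the thesis text):
% the reward-free pre-training phase searches for the policy whose state visitation is most spread out.
\[
  \pi^{\star} \in \arg\max_{\pi} \; H\!\left(d^{\pi}\right)
  \;=\; \arg\max_{\pi} \; -\sum_{s \in \mathcal{S}} d^{\pi}(s)\,\log d^{\pi}(s).
\]
% In the fine-tuning phase, \pi^{\star} is adapted to maximize the expected return of a specific
% downstream reward, reusing the knowledge gathered during reward-free interaction.
```

Under partial observability or with multiple agents, the analogous objective would have to be posed over the information the agents can actually access rather than the true underlying state; the precise formulation in those settings is the subject of the thesis itself.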
| File | Size | Format | Access | License |
|---|---|---|---|---|
| NewDirectionsRZ.pdf | 18.12 MB | Adobe PDF | Open access | All rights reserved |
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/307689
URN:NBN:IT:POLIMI-307689