Stochastic Constrained Optimal Control and Planning via Monte Carlo Tree Search for Autonomous Aircraft Systems

TROTTI, FRANCESCO
2025

Abstract

Stochastic constrained control and planning of autonomous systems under uncertainty are key challenges in modern control theory, driven by the growing demand for reliable and efficient autonomy. This thesis addresses these challenges by proposing a novel hierarchical control architecture for the constrained optimal control and planning of stochastic systems, with a focus on complex physical platforms such as aircraft and coordinated fleets. The core innovation lies in the integration and enhancement of decision-making frameworks, specifically Markov Decision Processes (MDPs) and Partially Observable Markov Decision Processes (POMDPs), within a hierarchical structure that separates low-level control from high-level planning. The low-level controller is tailored to the nonlinear aircraft dynamics, while the high-level planner uses an online Monte Carlo Tree Search (MCTS) algorithm to solve MDPs under stochastic uncertainty. A key contribution is the adaptation of MCTS to operate effectively within the MDP/POMDP frameworks, including formal guarantees of optimality and bounded sub-optimality. This enables robust planning in environments with noisy sensor measurements and partial observability. The use of the POMDP framework for integrated state estimation and uncertainty-aware decision-making represents an advance in applying probabilistic planning methods to continuous, safety-critical domains. The thesis also contributes the design of the overall control architecture, which provides modularity, scalability, and real-time feasibility across a diverse set of scenarios, including autonomous navigation, multi-agent coordination, and warehouse planning. By explicitly exploiting knowledge of the system dynamics in the planning process, the approach improves planning accuracy and efficiency compared with traditional black-box or reactive methods. Through these contributions, the thesis advances both the theoretical foundations and practical implementations of robust control for autonomous systems, offering a unified framework that bridges planning under uncertainty and nonlinear control.
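
As a rough illustration of the kind of online MCTS planner the abstract refers to, the following Python sketch implements generic MCTS with UCB1 action selection and random rollouts against an assumed generative MDP interface. This is not the algorithm developed in the thesis: the interface (actions, step, gamma) and all parameter values are hypothetical assumptions made for the example.

# Minimal online MCTS planner for a generative MDP model.
# Illustrative sketch only; the MDP interface (actions, step, gamma) is assumed.
import math
import random


class Node:
    """Search-tree node: visit counts and value estimates per action."""
    def __init__(self, state):
        self.state = state
        self.N = 0            # total visits to this node
        self.Na = {}          # visits per action
        self.Qa = {}          # mean return per action
        self.children = {}    # (action, next_state) -> Node


def ucb_action(node, actions, c=1.4):
    """UCB1 selection: trade off estimated value against exploration."""
    best, best_val = None, -math.inf
    for a in actions:
        n, q = node.Na.get(a, 0), node.Qa.get(a, 0.0)
        val = math.inf if n == 0 else q + c * math.sqrt(math.log(node.N + 1) / n)
        if val > best_val:
            best, best_val = a, val
    return best


def rollout(mdp, state, depth):
    """Random-policy rollout to estimate the value of a leaf state."""
    ret, discount = 0.0, 1.0
    for _ in range(depth):
        a = random.choice(mdp.actions(state))
        state, r, done = mdp.step(state, a)   # generative model: sample next state and reward
        ret += discount * r
        discount *= mdp.gamma
        if done:
            break
    return ret


def simulate(mdp, node, depth):
    """One simulation: select with UCB1, expand, roll out, back up the return."""
    if depth == 0:
        return 0.0
    a = ucb_action(node, mdp.actions(node.state))
    s2, r, done = mdp.step(node.state, a)
    key = (a, s2)
    if key not in node.children:
        node.children[key] = Node(s2)
        future = 0.0 if done else rollout(mdp, s2, depth - 1)
    else:
        future = 0.0 if done else simulate(mdp, node.children[key], depth - 1)
    ret = r + mdp.gamma * future
    # Incremental update of visit counts and the running mean of returns.
    node.N += 1
    node.Na[a] = node.Na.get(a, 0) + 1
    node.Qa[a] = node.Qa.get(a, 0.0) + (ret - node.Qa.get(a, 0.0)) / node.Na[a]
    return ret


def plan(mdp, state, n_sims=1000, depth=20):
    """Run n_sims simulations from the current state and return the best root action."""
    root = Node(state)
    for _ in range(n_sims):
        simulate(mdp, root, depth)
    return max(root.Qa, key=root.Qa.get)

In an online receding-horizon loop, plan would be called at every decision step from the currently estimated state, the selected action applied, and the search repeated from the resulting state; extensions to POMDPs typically replace the state at the root with a belief.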
Language: English
Keywords: Stochastic Optimal Control, Optimal Control, Markov Decision Process, Autonomous Systems, Aircraft Planning and Control
Pages: 137
Files in this item:
Trotti_PhD_thesis_sign.pdf (open access, 14.55 MB, Adobe PDF)

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/208943
The NBN code of this thesis is URN:NBN:IT:UNIVR-208943