
Exploiting predictive capabilities in motion control for autonomous agents operating in crowded environments

Gaetan, Elisa
2026

Abstract

The study of interacting entities—ranging from biological organisms to autonomous machines—offers a unifying framework for understanding the emergence of collective behavior in complex systems. Across disciplines such as physics, control theory, computer science, and the social sciences, a central challenge lies in capturing how local interactions among agents give rise to macroscopic patterns and decision dynamics. Deterministic models, while analytically tractable, often fail to account for the uncertainty and heterogeneity inherent in real-world environments. Stochastic approaches, particularly those based on Markovian frameworks, provide a principled alternative by representing agents as probabilistically transitioning between states under local interaction rules. This perspective enables the modeling of emergent phenomena such as consensus, competition, and polarization, while maintaining computational simplicity. Building upon this foundation, this thesis presents two main contributions that bridge modeling and control of interacting decision-making agents. The first concerns the problem of modeling, inferring, and forecasting agents’ decisions within dynamic environments. A Markov chain–based framework is developed to represent the evolution of agents’ decisions in a network, allowing for probabilistic characterization of behavioral transitions. This model is analyzed in two domains: (i) social networks, where agents are interconnected or influenced by environmental factors, and (ii) traffic scenarios, where a decision-making model for human drivers is proposed based on scenario-specific rules and empirically grounded behavioral observations. The second contribution focuses on the integration of these decision-making models within a control framework. The developed models are incorporated into closed-loop systems—particularly in traffic scenarios—where an autonomous agent acts as the actuator of a collective control strategy. 
In addition, an alternative controller is designed for pedestrian environments, emphasizing collision avoidance under stochastic predictions of human obstacle trajectories. These implementations demonstrate how probabilistic models of behavior can inform adaptive, decentralized control mechanisms in complex, interactive settings. Ultimately, the investigation of interacting agents seeks to explain how simple local rules generate complex global behaviors, and how these processes can be shaped through decentralized, adaptive control strategies. By integrating stochastic modeling, decision theory, and behavioral insights, this research contributes to the design of autonomous and socially aware systems that operate safely, efficiently, and cooperatively within dynamic, human-centered environments.
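As an illustrative sketch only (not the model developed in the thesis), the idea of agents probabilistically transitioning between decisions under local interaction rules can be pictured as a Markov chain over binary decisions on a network, where each agent's transition probability depends on the fraction of neighbors holding each decision. All names and parameter values below (`step`, `influence`, `noise`, the ring network) are hypothetical choices for this sketch:

```python
import random

def step(decisions, neighbors, influence=0.8, noise=0.05):
    """One synchronous Markov transition: each agent adopts decision 1
    with a probability shaped by its neighbors' current decisions."""
    new = []
    for i, own in enumerate(decisions):
        nbrs = neighbors[i]
        frac = sum(decisions[j] for j in nbrs) / len(nbrs) if nbrs else 0.5
        # Mix neighbor influence with inertia on the agent's own state,
        # bounded away from 0 and 1 by a small noise floor.
        p_one = noise + (1 - 2 * noise) * (influence * frac + (1 - influence) * own)
        new.append(1 if random.random() < p_one else 0)
    return new

# Small ring network of 6 agents, initially split into two opinion blocks.
neighbors = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
decisions = [1, 1, 1, 0, 0, 0]
for _ in range(20):
    decisions = step(decisions, neighbors)
print(decisions)  # stochastic outcome; runs tend toward local agreement
```

Even this toy chain exhibits the consensus and polarization phenomena the abstract refers to, while remaining computationally trivial to simulate.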
Language: English
Supervisor: Dotoli, Mariagrazia
Institution: Politecnico di Bari
Files in this record:
File: Gaetan_DAuSy_38cycle_FinalThesis_signed.pdf (Adobe PDF, 8.22 MB)
Access: open access
License: All rights reserved

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/354362
The NBN code of this thesis is URN:NBN:IT:POLIBA-354362