
Learning-Based Perception Methods for Cross-Domain Autonomous Robotics Navigation and Environment Monitoring

FUSARO, DANIEL
2026

Abstract

Autonomous mobile robots rely on robust perception to safely navigate and interact with complex, dynamic environments. Accurate understanding of the surroundings is critical not only for obstacle avoidance and traversability assessment but also for higher-level tasks such as semantic reasoning, object recognition, and temporal monitoring. Traditional approaches to robotic perception often struggle to balance accuracy, computational efficiency, and generalization across diverse scenarios, particularly in unstructured or crowded environments where terrain, vegetation, and dynamic agents introduce significant variability. Furthermore, the development of learning-based perception methods is frequently constrained by the availability of large-scale, high-quality datasets, limiting their applicability in real-world settings. Efficient and adaptable perception solutions are therefore fundamental to enabling robots to operate safely and autonomously, from urban streets and industrial settings to natural outdoor environments and indoor spaces. The key to addressing these challenges lies in the integration of multi-modal sensors, architectures capable of capturing fine-grained geometric and semantic information, and parameter-efficient training strategies. High-performance methods must also be deployable on embedded platforms, where computational and energy resources are limited, while remaining robust to domain shifts and variations in sensor data. In this thesis, we address these challenges by developing novel approaches for traversability analysis, 3D semantic and instance segmentation, place recognition, and temporal object re-identification, alongside the creation of a multi-sensor, large-scale dataset for complex outdoor environments. Our contributions improve both the accuracy and efficiency of robotic perception systems, providing practical methods and resources to advance real-time, robust scene understanding in autonomous mobile robotics.
Most of the contributions presented in this thesis are available as open-source software packages. We believe that this will foster collaboration, enable precise repeatability of our experiments, and facilitate future research on robotic perception.
12 February 2026
English
PRETTO, ALBERTO
Università degli studi di Padova
Files in this record:
File: Daniel_Fusaro.pdf
Access: open access
License: all rights reserved
Size: 25.56 MB
Format: Adobe PDF

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/359531
The NBN code of this thesis is URN:NBN:IT:UNIPD-359531