Advanced embedded systems for autonomous robots control
Svishchev, Nikolai
2025
Abstract
In recent years, Autonomous Underwater Vehicles (AUVs) and Remotely Operated Underwater Vehicles (ROUVs) have become essential for a wide range of underwater missions, including environmental monitoring, offshore resource exploration, structural inspection, and rescue operations. A primary challenge in these missions is precise underwater navigation, particularly during structure inspections, where acoustic and optical systems offer complementary benefits. These tasks require building a local 3D map of the object using inertial systems, a process complicated by the fact that global navigation methods such as Long Baseline (LBL) or Ultra-Short Baseline (USBL) positioning are often impractical due to limitations on acoustic signal transmission from surface beacons. While optical systems deliver detailed object information, they are hindered by stereo vision distortions, lighting variability, and noise from suspended particles. Conversely, Forward-Looking Sonar (FLS) is robust to environmental noise but operates at lower resolution and in polar coordinates, complicating data alignment with optical devices such as a stereo camera. In the first part of our research, we developed a nonlinear model of the ROUV in a Computer-Aided Design (CAD) application and identified essential parameters, such as hydrostatic and hydrodynamic coefficients, through Computational Fluid Dynamics (CFD) simulations. These findings enabled accurate modeling of dynamic behaviors, including Coriolis effects and damping forces, forming the basis of the vehicle’s control equations. Additionally, a fractional-order PI controller was designed for yaw control, derived from a 6-DoF nonlinear model of the ROUV. The integration of the Robot Operating System (ROS) and the Gazebo simulator allowed testing of control algorithms and sensor interactions within a digital-twin environment that included an Inertial Measurement Unit (IMU), FLS, Doppler Velocity Log (DVL), and stereo camera.
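To illustrate how a fractional-order PI (PI^λ) yaw controller of the kind described above can be realized, the sketch below approximates the fractional integral with a truncated Grünwald–Letnikov sum over the error history. This is a minimal illustration under short-memory assumptions; the gains, fractional order, and memory length are placeholder values, not the parameters identified in the thesis.

```python
import numpy as np

def gl_weights(alpha, n):
    # Grünwald–Letnikov binomial weights w_k = (-1)^k * C(alpha, k),
    # computed with the standard recurrence w_k = w_{k-1} * (1 - (alpha+1)/k).
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

class FOPIController:
    """PI^lambda controller: u = Kp*e + Ki * I^lambda(e).

    The fractional integral I^lambda is approximated by a truncated
    Grünwald–Letnikov sum (short-memory principle): only the most
    recent `memory` error samples contribute.
    """
    def __init__(self, kp, ki, lam, dt, memory=500):
        self.kp, self.ki, self.lam, self.dt = kp, ki, lam, dt
        self.memory = memory
        self.errors = []
        # Integration of order lambda = differentiation of order -lambda.
        self.w = gl_weights(-lam, memory)

    def update(self, yaw_ref, yaw):
        e = yaw_ref - yaw
        self.errors.append(e)
        hist = self.errors[-(self.memory + 1):]
        # GL approximation: I^lam e(t) ~ dt^lam * sum_k w_k * e(t - k*dt),
        # with w_0 applied to the most recent sample.
        acc = sum(wk * ek for wk, ek in zip(self.w, reversed(hist)))
        frac_int = (self.dt ** self.lam) * acc
        return self.kp * e + self.ki * frac_int
```

For λ = 1 the GL weights all equal 1 and the scheme reduces to the classical rectangular-rule integral, which is a convenient sanity check when tuning.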
This setup facilitated a smooth transition of navigation, computer vision, and path-planning functionalities from simulation to real-world deployment. Building on the nonlinear ROUV model and the developed control algorithm, we integrated a multimodal inertial-visual mapping and navigation system. This system enabled us to analyze the impact of the vehicle’s motion on the accuracy and stability of the visual-inertial system in a dynamically changing environment. The core of this research is the mapping, navigation, and control of an underwater vehicle equipped with a visual-inertial system that merges multimodal data from FLS, DVL, IMU, and a stereo camera through an unsupervised deep-learning detector and descriptor algorithm. Our research focuses on the geometric and computational integration of optical and sonar images. Given that classical computer vision and supervised deep-learning feature-matching algorithms are inadequate for opti-acoustic data fusion, we introduce an unsupervised feature-matching approach tailored to sonar and optical datasets. This method enhances motion estimation by combining a hybrid direct and indirect matching strategy with opti-acoustic epipolar constraints, in which sonar data refines the depth of optical visual features. Both sonar-to-optical and optical-to-sonar mappings are employed to improve feature-matching reliability under varying lighting and turbidity conditions. We also introduced opti-acoustic image processing, including a calibration approach for the sonar and the stereo camera both individually and jointly, along with imaging enhancements to improve data-fusion quality. To further enhance degraded optical images, we implemented a method that combines in-air stereo calibration with a physically guided underwater image enhancement framework based on the integration of synthetic and real images.
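The idea that a sonar range measurement can refine the depth of a matched optical feature admits a compact geometric sketch: the sonar constrains the point to a sphere of the measured radius around the sonar origin (its elevation angle being ambiguous), while the camera constrains it to a back-projected ray; intersecting the two fixes the depth. The function below is a simplified illustration of that geometry only, with hypothetical names and a sonar origin expressed directly in the camera frame; it is not the thesis’s actual fusion pipeline.

```python
import numpy as np

def refine_depth_with_sonar(pixel, K, t_sonar, r):
    """Refine the depth of an optical feature using a sonar range.

    pixel   : (u, v) feature location in the camera image
    K       : 3x3 camera intrinsic matrix
    t_sonar : sonar origin expressed in the camera frame
    r       : sonar-measured range to the matched feature

    Returns the 3-D point in the camera frame, or None if the
    range sphere does not intersect the camera ray.
    """
    # Back-project the pixel to a unit-norm ray through the camera center.
    d = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    d /= np.linalg.norm(d)
    # Solve |lam*d - t_sonar| = r for the depth lam along the ray:
    # lam^2 - 2*lam*(d . t_sonar) + |t_sonar|^2 - r^2 = 0.
    b = -2.0 * d.dot(t_sonar)
    c = t_sonar.dot(t_sonar) - r * r
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # inconsistent match: sphere misses the ray
    lam = (-b + np.sqrt(disc)) / 2.0  # far root, in front of the camera
    return lam * d
```

With the sonar and camera co-located, the recovered depth equals the sonar range along the ray, which is the degenerate sanity check; a nonzero baseline shifts the intersection accordingly.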
By integrating an unsupervised deep-learning algorithm, we were able to map sonar features onto optical images, enhancing depth estimation while preserving key details. Testing across various scene geometries showed substantial improvements in visual odometry accuracy, which is crucial for effective navigation and inspection in challenging underwater conditions. Lastly, this research presents a robust framework for simulating underwater missions by combining CFD-based dynamics modeling, control algorithms, and sensor-fusion techniques with optical and acoustic data. This comprehensive platform addresses the complexities of underwater navigation and 3D mapping, providing a unified system for testing and optimizing ROUV operations in diverse environments.

File: TesiPhD_SVISHCHEV_DEF.pdf (open access, 11.91 MB, Adobe PDF)
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/189079
URN:NBN:IT:POLIBA-189079