
Multiple view odometry for autonomous driving

2019

Abstract

This thesis presents a real-time visual odometry system based on multiple views. The motion estimation system has been designed for a concrete implementation on a computer vision processor specifically intended for autonomous driving. An autonomous vehicle has to operate in different environments, each with its own challenges, and it must provide the same level of safety in every condition; accuracy and reliability are therefore essential requirements for such a system. Moreover, the hardware-related constraints led to the design of a lightweight method with small memory requirements that does not rely on maps. The proposed approach uses feature correspondences between consecutive frames: keypoints are extracted from each image with the Harris corner detector, and a BRIEF-like descriptor is then used for matching. A fundamental characteristic of the proposed method is its robustness against outliers, provided by a motion estimator specifically designed to reject them through a weight function. A multiple-view approach has been implemented to increase the accuracy and reliability of the system. The multiple-view paradigm is exploited both in space and in time: the motion estimate is computed using one or more cameras and then further optimized over a set of the most recently acquired frames through Sparse Bundle Adjustment. The system is therefore quite modular and can be tailored to the particular setup of the autonomous vehicle. An open dataset widely used in the literature has been employed to evaluate the accuracy of the proposed method and to compare it with top-ranked state-of-the-art methods, showing comparable results. The system has also been tested in different configurations on a dataset specifically acquired for its validation, in order to highlight the contributions of the proposed multiple-view approach.
The results show that the use of multiple cameras significantly reduces the overall translation error, while the use of multiple frames considerably decreases the rotation error.
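The abstract names the Harris corner detector as the keypoint extraction step. As a purely illustrative sketch (not the thesis implementation — the function name, the 3×3 window, and the constant k = 0.04 are conventional assumptions), the per-pixel Harris response R = det(M) − k·trace(M)² over the structure tensor M can be computed as:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def harris_response(img, k=0.04, window=3):
    """Per-pixel Harris corner response R = det(M) - k * trace(M)^2.

    Hypothetical helper, shown only to illustrate the detector named
    in the abstract; window size and k are conventional assumptions.
    """
    # Image gradients (rows = y, columns = x) via central differences.
    Iy, Ix = np.gradient(img.astype(float))

    def box_sum(a):
        # Sum of `a` over a window x window neighbourhood of each pixel.
        pad = window // 2
        padded = np.pad(a, pad, mode="edge")
        return sliding_window_view(padded, (window, window)).sum(axis=(2, 3))

    # Structure tensor M = [[Sxx, Sxy], [Sxy, Syy]] accumulated per window.
    Sxx = box_sum(Ix * Ix)
    Syy = box_sum(Iy * Iy)
    Sxy = box_sum(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2
```

Keypoints would then be the local maxima of this response above a threshold; the thesis describes each of them with a BRIEF-like binary descriptor (pairwise intensity comparisons around the keypoint) for matching across frames.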
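The outlier-rejecting motion estimator is described only at a high level (a weight function that down-weights bad correspondences). A common realization of that idea is iteratively reweighted least squares; the sketch below is illustrative only, reduced to 2-D rigid motion with a Huber-style weight function — the thesis estimates full vehicle motion and its actual weight function is not given here:

```python
import numpy as np

def estimate_rigid_motion(P, Q, iters=10, c=1.345):
    """Robust 2-D rigid motion (R, t) such that Q ~= P @ R.T + t.

    Illustrative IRLS sketch, not the thesis estimator: each pass
    solves a weighted Procrustes problem, then recomputes Huber-style
    weights from the residuals so that outliers are down-weighted.
    P, Q: (N, 2) arrays of corresponding points.
    """
    w = np.ones(len(P))
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        # Weighted centroids of both point sets.
        mp = (w[:, None] * P).sum(0) / w.sum()
        mq = (w[:, None] * Q).sum(0) / w.sum()
        # Weighted cross-covariance; its SVD gives the optimal rotation.
        H = ((P - mp) * w[:, None]).T @ (Q - mq)
        U, _, Vt = np.linalg.svd(H)
        d = np.linalg.det(Vt.T @ U.T)  # guard against reflections
        R = Vt.T @ np.diag([1.0, d]) @ U.T
        t = mq - R @ mp
        # Residuals -> robust scale -> Huber weights (1 inside c*sigma).
        r = np.linalg.norm(Q - (P @ R.T + t), axis=1)
        sigma = max(np.median(r) / 0.6745, 1e-12)
        w = np.where(r <= c * sigma, 1.0, c * sigma / np.maximum(r, 1e-12))
    return R, t, w
```

The returned weights identify which correspondences the estimator treated as outliers, which is the role the abstract assigns to its weight function.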
Odometria tramite viste multiple per guida autonoma (original Italian title)
March 2019
Language: English
Keywords: visual odometry, computer vision, autonomous driving
Scientific sector: ING-INF/05
Università degli Studi di Parma
Files in this record:
relazione-finale-schema.pdf — other attached material, 57.62 kB, Adobe PDF (access only from BNCF and BNCR)
thesis.pdf — other attached material, 15.05 MB, Adobe PDF (access only from BNCF and BNCR)

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/134738
The NBN code of this thesis is URN:NBN:IT:UNIPR-134738