Perception and Motion: use of Computer Vision to solve Geometry Processing problems

2012

Abstract

Computer vision and geometry processing are often seen as two different and, in a certain sense, distant fields: the former works on two-dimensional data, while the latter requires three-dimensional information. But are 2D and 3D data really disconnected? Consider human vision: each eye captures patterns of light, which are then used by the brain to reconstruct the perception of the observed scene. In a similar way, if the eye detects a variation in those patterns of light, we understand that the scene is not static; we are therefore able to perceive the motion of one or more objects in the scene. In this work, we show how the perception of 2D motion can be used to solve two significant problems, both dealing with three-dimensional data. In the first part, we show how the so-called optical flow, which represents the observed motion, can be used to estimate the alignment error of a set of digital cameras looking at the same object. In the second part, we show how the detected 2D motion of an object can be used to better understand its underlying geometric structure, by detecting its rigid parts and the way they are connected.
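As a rough illustration of the kind of 2D motion signal the abstract refers to, the sketch below computes a dense optical-flow field between two hypothetical views of the same object using OpenCV's Farneback method; the mean flow magnitude then serves as a crude scalar proxy for residual image motion. This is only an assumed, generic example (file names and parameters are placeholders), not the method developed in the thesis.

# Minimal sketch, assuming two grayscale images of the same object are available.
import cv2
import numpy as np

view_a = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input images
view_b = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)

# Farneback dense optical flow: one 2D displacement vector per pixel.
# Positional parameters: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(view_a, view_b, None, 0.5, 3, 15, 3, 5, 1.2, 0)

# Mean flow magnitude: a crude scalar measure of the observed image motion.
magnitude = np.linalg.norm(flow, axis=2)
print("mean flow magnitude (pixels):", magnitude.mean())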
Language: Italian
Files in this item:

File: Stefano_Marras_PHD_tesi.pdf
Access: only from BNCF and BNCR
Type: Other attached material
License: All rights reserved
Size: 2.46 MB
Format: Adobe PDF
Documents in UNITESI are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/325204
The NBN code of this thesis is URN:NBN:IT:BNCF-325204