
Interaction and Rendering with Harvested 3D Data

2019

Abstract

Nowadays, the Computer Graphics community shows great interest in 3D scanning technologies based on both active and passive solutions. The market offers several low-cost devices and software packages that are affordable for everyone, making it possible to acquire almost any object or environment quickly and easily, even for users with limited technical knowledge and expertise. The acquisition output is complex 3D data, usually point clouds, that encode not only the spatial and visibility information but also the reflected radiance of each point sample. These data are used for different purposes, from simple visualization to more complex processing, such as the computation of dense triangular models or the extraction of semantic information. Although these devices have been on the market for several years and are now well established, less attention has been paid to the interaction with this noisy data, in both color and geometry, and to the use of all input attributes in real-time rendering to approximate global illumination effects. The goal of this thesis is to address these problems and to propose new algorithms to render and interact with raw harvested 3D data. The thesis presents four main contributions. The first is a new algorithm for the interaction with temporal 3D data, where the same scene is acquired multiple times at different points in time, which improves the visualization and perception of the most significant geometric changes while limiting the perceptual distortion caused by the noisy data. The second contribution is a new method to approximate global illumination effects in a point cloud using ambient occlusion, with an algorithm tailored specifically for point clouds, where missing visibility information limits the results of the traditional approach. The third contribution is a simple algorithm to expand the dynamic range of the color data of a Low Dynamic Range (LDR) point cloud, the usual output of low-cost devices, using a single High Dynamic Range (HDR) photo of the same scene. Finally, the last contribution is a new Point-Based Global Illumination algorithm that, taking as input an HDR point cloud expanded with the previous method, can compute in real time the relighting of a virtual object placed inside the cloud.
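For readers unfamiliar with how ambient occlusion is usually approximated directly on point samples, the sketch below illustrates the classical baseline the abstract alludes to: every neighboring point is treated as a small oriented disk that blocks part of the hemisphere above the receiver point. This is only an illustrative, generic estimate under assumed parameters (the function name, the disk-area value, and the use of SciPy's cKDTree are choices made here for the example); it is not the algorithm proposed in the thesis, which is designed precisely to cope with the missing visibility information that limits such a naive approach.

import numpy as np
from scipy.spatial import cKDTree

def point_cloud_ao(points, normals, radius=0.1, disk_area=1e-4):
    """Naive ambient-occlusion estimate for a point cloud.

    points  : (N, 3) array of point positions
    normals : (N, 3) array of unit normals
    Each neighbor within `radius` is treated as a small oriented disk
    blocking part of the hemisphere above the receiver point.
    Illustrative baseline only, not the thesis's algorithm.
    """
    tree = cKDTree(points)
    ao = np.zeros(len(points))
    for i, (p, n) in enumerate(zip(points, normals)):
        occlusion = 0.0
        for j in tree.query_ball_point(p, radius):
            if j == i:
                continue
            v = points[j] - p
            d2 = float(np.dot(v, v))
            if d2 == 0.0:
                continue
            v_hat = v / np.sqrt(d2)
            cos_r = max(0.0, float(np.dot(n, v_hat)))            # receiver faces the blocker
            cos_e = max(0.0, float(np.dot(normals[j], -v_hat)))  # blocker faces the receiver
            # approximate solid angle subtended by a small disk of area `disk_area`
            occlusion += (disk_area * cos_r * cos_e) / (np.pi * d2 + disk_area)
        ao[i] = 1.0 - min(1.0, occlusion)
    return ao

A typical call would be ao = point_cloud_ao(points, normals, radius=0.05), with the resulting per-point factor multiplied into the rendered color. Because the estimate only sees the sampled points, holes and occluded regions that were never scanned contribute no blockers, which is exactly the limitation of the "traditional approach" mentioned in the abstract.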
6-mar-2019
Italian
Palma, Gianpaolo
Cignoni, Paolo
Università degli Studi di Pisa
Files in this record:

PhDReport.pdf
Open Access from 09/03/2022
Type: Other attached material
Size: 474.04 kB
Format: Adobe PDF

thesisETD.pdf
Open Access from 09/03/2022
Type: Other attached material
Size: 11.2 MB
Format: Adobe PDF

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/134248
The NBN code of this thesis is URN:NBN:IT:UNIPI-134248