Manifold Learning and Sparsity Priors for Inverse Problems

SCIUTTO, SILVIA
2024

Abstract

In this thesis we investigate two distinct regularizing approaches for solving inverse problems. The first approach assumes that the unknown belongs to a manifold represented by a deep generative model. In particular, we use multiresolution analysis tools to design a generative neural network in the continuous setting, where the model's output belongs to an infinite-dimensional function space, and we study the injectivity of this generator, since injectivity is a necessary condition for its range to be a manifold. Furthermore, we derive Lipschitz stability estimates for (possibly nonlinear) infinite-dimensional inverse problems whose unknowns belong to the manifold generated by our generative model. However, this generative model can represent only manifolds that admit a global parameterization. To address this limitation and represent manifolds of arbitrary topology, we propose to learn a mixture model of variational autoencoders, where each encoder-decoder pair represents one chart of the manifold. Moreover, we develop a Riemannian gradient descent algorithm on the learned manifold to solve inverse problems under the constraint that the unknowns belong to the manifold. The second approach introduces a probabilistic sparsity prior formulated as a mixture of degenerate Gaussians, capable of modeling sparsity with respect to a generic basis. Within this framework, we develop a neural network that serves as the Bayesian estimator for linear inverse problems, and we propose both a supervised and an unsupervised training strategy to estimate the parameters of this network. We demonstrate the performance of both approaches on denoising and deblurring problems with 1D and 2D datasets. We also apply the mixture of variational autoencoders approach to the electrical impedance tomography problem.
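As a rough illustration of the manifold-constrained reconstruction summarized above, the following is a minimal sketch (not the algorithm developed in the thesis) of gradient descent in the latent space of a generator for a linear inverse problem. The toy generator G, the random forward operator A, the dimensions, and the step size are hypothetical placeholders; the thesis instead performs Riemannian gradient descent on a manifold learned as a mixture of variational autoencoder charts.

import torch

torch.manual_seed(0)

latent_dim, signal_dim, meas_dim = 8, 64, 32

# Toy generator standing in for a trained decoder (hypothetical; the thesis
# uses a mixture of variational autoencoder charts).
G = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, signal_dim),
)
for p in G.parameters():  # the generator stays fixed during reconstruction
    p.requires_grad_(False)

# Random linear forward operator and synthetic noisy measurements y = A x + noise.
A = torch.randn(meas_dim, signal_dim) / meas_dim ** 0.5
x_true = G(torch.randn(latent_dim))   # ground truth lying on the generator's range
y = A @ x_true + 0.01 * torch.randn(meas_dim)

# Latent-space gradient descent: minimize ||A G(z) - y||^2 over z, so the
# reconstruction G(z) is constrained to the learned manifold (here, the range of G).
z = torch.zeros(latent_dim, requires_grad=True)
optimizer = torch.optim.SGD([z], lr=0.05)
for _ in range(500):
    optimizer.zero_grad()
    loss = ((A @ G(z) - y) ** 2).sum()
    loss.backward()
    loss_value = loss.item()
    optimizer.step()

x_rec = G(z).detach()
print(f"final data misfit: {loss_value:.3e}")
print(f"relative error: {(x_rec - x_true).norm() / x_true.norm():.3f}")

This simplified variant optimizes directly over a single global latent parameterization; the mixture-of-charts construction in the thesis is precisely what removes that restriction for manifolds with nontrivial topology.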
27 May 2024
English
ALBERTI, GIOVANNI
SANTACESARIA, MATTEO
VIGNI, STEFANO
Università degli studi di Genova
Files in this item:
phdunige_4185534.pdf (open access, 20.21 MB, Adobe PDF)
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/161286
The NBN code of this thesis is URN:NBN:IT:UNIGE-161286