Machine Learning Techniques for Inverse Problems
CHIRINOS RODRIGUEZ, JONATHAN EDUARDO
2024
Abstract
Inverse problems serve as a general framework for modeling many real-world applications; typical examples are MRI, X-ray CT, and image recovery. An inverse problem involves reconstructing an unknown source from limited and possibly distorted observations. So-called data-driven techniques for solving inverse problems have become popular in recent years, both because they require less prior knowledge and because of their effectiveness in many practical scenarios. Yet few theoretical guarantees have been provided to date. This manuscript aims to bridge this gap in several key directions. First, we propose and study a statistical machine learning approach, based on Empirical Risk Minimization (ERM), to determine the best regularization parameter given a finite set of examples. Our main contribution is a theoretical analysis showing that, if the number of examples is large enough, this approach is optimal and adaptive to the noise level and the smoothness of the solution. We showcase the applicability of our framework to a broad class of inverse problems, including spectral regularization methods and sparsity-promoting norms. Numerical simulations further support and illustrate the theoretical findings. Moreover, we introduce a data-driven approach for constructing (firmly) nonexpansive operators. We present the utility of such a technique in the context of Plug-and-Play methods, where a proximal operator in classical algorithms such as Forward-Backward Splitting or the Chambolle--Pock primal-dual iteration is replaced by an operator that aims to be firmly nonexpansive. We establish a rigorous theoretical framework for learning such operators using an ERM approach. Further, we derive a solution that is guaranteed to be firmly nonexpansive and piecewise affine on the convex envelope of the training data, and we prove that such an operator converges to the best empirical solution as the number of points inside the envelope increases.
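As a minimal illustration of the ERM idea described above, the sketch below selects a regularization parameter for Tikhonov reconstruction by minimizing the empirical reconstruction error over a grid of candidates. The forward operator, training set, noise level, and grid are all hypothetical toy choices, not the thesis's actual setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a fixed forward operator A and noisy observations
# y_i = A x_i + noise for a small training set of (x_i, y_i) pairs.
d, n_train = 20, 30
A = rng.standard_normal((d, d)) / np.sqrt(d)
X = rng.standard_normal((d, n_train))
Y = A @ X + 0.1 * rng.standard_normal((d, n_train))

def tikhonov(y, lam):
    """Tikhonov-regularized reconstruction: (A^T A + lam I)^{-1} A^T y."""
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ y)

# Empirical Risk Minimization over a finite grid of candidate parameters:
# pick the lambda whose reconstructions are closest to the training sources.
grid = np.logspace(-4, 1, 50)
risks = [np.mean(np.sum((tikhonov(Y, lam) - X) ** 2, axis=0)) for lam in grid]
lam_hat = grid[int(np.argmin(risks))]
print(f"ERM-selected regularization parameter: {lam_hat:.4f}")
```

In practice the minimization need not run over a grid; the thesis studies when such an empirically selected parameter is optimal and adaptive as the number of examples grows.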
Finally, we propose a practical implementation strategy and an application to image denoising. Data-driven approaches often require dealing with infinite-dimensional problems. Representer theorems, introduced in the context of kernel methods and recently extended to the study of general variational problems, can be applied to tackle this issue. These theorems characterize solutions of infinite-dimensional problems as finite convex combinations of a limited number of "atoms". In specific cases, these atoms can be shown to be the extreme points of a suitable unit ball. In this setting, we contribute by characterizing the set of extreme points of the Lipschitz unit ball in finite metric spaces. Consequently, we provide a representer theorem in this setting, generalizing the so-called Minkowski-Carathéodory theorem to infinite-dimensional spaces.
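The Plug-and-Play substitution described earlier in the abstract can be sketched as follows: a forward-backward iteration where the proximal step is replaced by a firmly nonexpansive denoiser. Here the denoiser is plain soft-thresholding (the prox of the l1 norm, hence firmly nonexpansive), a stand-in for the learned operators studied in the thesis; the operator, signal, and threshold are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inverse problem: recover a sparse x from y = A x + noise.
d = 50
A, _ = np.linalg.qr(rng.standard_normal((d, d)))  # well-conditioned operator
x_true = np.zeros(d)
x_true[:5] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(d)

def denoiser(v, t=0.05):
    # Soft-thresholding: the prox of t*||.||_1, a firmly nonexpansive map.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Plug-and-Play forward-backward splitting: gradient step on the data
# fidelity term ||A x - y||^2 / 2, then the plugged-in denoiser.
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(d)
for _ in range(200):
    x = denoiser(x - step * A.T @ (A @ x - y))

print("reconstruction error:", np.linalg.norm(x - x_true))
```

Any firmly nonexpansive operator can be plugged in at the denoising step; the thesis's contribution is a principled, ERM-based way of learning such an operator with that property guaranteed.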
File: phdunige_4965028.pdf (open access) | Size: 2.67 MB | Format: Adobe PDF
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/116272
URN:NBN:IT:UNIGE-116272