Novelties in the Deep Image Prior framework for image restoration and segmentation
Ambra Catozzi
2025
Abstract
The Deep Image Prior (DIP) approach is an unsupervised deep learning methodology that has attracted considerable interest in recent years for its effectiveness on imaging problems such as denoising, inpainting, and super-resolution, without any need for extensive pre-training. A key limitation of DIP is its tendency to overfit noise when optimization runs too long, producing the so-called semiconvergence effect, in which the model begins to capture noise rather than improve image quality. Although DIP has proven effective in tasks such as denoising and super-resolution, its potential in areas such as blind deconvolution and segmentation remains under-explored. This thesis contributes to the DIP framework by exploring new ways of applying the approach to blind deconvolution and segmentation tasks, and by developing efficient early-stopping techniques that automatically halt network optimization once an optimal reconstruction is reached. To address blind deconvolution and segmentation, the original DIP framework is extended with variational models specifically tailored to the structural and statistical characteristics of these imaging applications, enhancing DIP's adaptability to complex image restoration tasks. In addition, two early-stopping strategies are developed, each based on a distinct idea. The first uses Neural Architecture Search to generate optimal hyperparameter configurations for the neural network used in Deep Image Prior, helping to prevent the semiconvergence effect. The second relies on a modified version of BRISQUE, a no-reference image quality metric, to track the behavior of the PSNR curve produced by Deep Image Prior without requiring the ground-truth image.
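None of the following code appears in the thesis. It is a minimal, self-contained numpy sketch of the semiconvergence effect the abstract describes, on a 1-D toy problem: an overparameterized random-feature model (a stand-in for the convolutional network used in Deep Image Prior) is fit by gradient descent on the data-fit term alone. The error against the observed noisy signal keeps shrinking, while the error against the hidden clean signal first drops and then rises as the model starts to capture noise; the `argmin` over the true error plays the role of the oracle early-stopping point that the thesis's strategies try to locate without ground truth. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
t = np.linspace(0.0, 1.0, n)
clean = np.sin(2 * np.pi * 3 * t)                 # hidden ground-truth signal
noisy = clean + 0.3 * rng.standard_normal(n)      # observed data

# Random Fourier features, wide enough (m >> n) to eventually fit the noise.
m = 512
W = 60.0 * rng.standard_normal(m)                 # feature frequencies
b = rng.uniform(0.0, 2.0 * np.pi, m)              # feature phases
Phi = np.cos(np.outer(t, W) + b) / np.sqrt(m)     # (n, m) design matrix
theta = np.zeros(m)

lr = 0.5
fit_err, true_err = [], []
for step in range(3000):
    pred = Phi @ theta
    theta -= lr * Phi.T @ (pred - noisy) / n      # gradient step on data fit only
    fit_err.append(float(np.mean((pred - noisy) ** 2)))
    true_err.append(float(np.mean((pred - clean) ** 2)))

best = int(np.argmin(true_err))                   # oracle early-stopping point
print(f"best stop at step {best}: true error {true_err[best]:.4f}")
print(f"final step:              true error {true_err[-1]:.4f}")
```

Under these assumptions, stopping at `best` gives a reconstruction no worse (and typically much better) than running to the final iteration, which is exactly the gap an automatic early-stopping criterion must close without access to `clean`.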
File | Size | Format
---|---|---
PhDThesis_Ambra_Catozzi_2025.pdf (under embargo until 01/04/2026) | 43.8 MB | Adobe PDF
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/213350
URN:NBN:IT:UNIPR-213350