Texture Analysis Applications for High Performance Embedded Digital Cameras
DE GIOIA, FRANCESCO
2022
Abstract
The implementation of Texture Analysis algorithms on embedded devices requires reducing the computational load and taking advantage of the hardware resources available on board. In this dissertation, the properties of Convolutional Neural Networks have been leveraged to train the networks on datasets of smaller image patches. Data-augmentation techniques have been applied in order to learn features invariant to geometric distortions, illumination changes and noise. Deploying convolutional models on resource-constrained devices required the application of techniques that reduce model complexity, such as pruning, quantization and model compression. In this work we assume the neural model to be part of a more complex image processing pipeline implemented with traditional Image Processing or Computer Vision techniques. Using this approach, we are able to train compact models that approximate one or more steps of more complex pipelines. Moreover, since the models are designed to be compatible with hardware accelerators for neural networks, they can be used to reduce the computational load and increase the processing speed of the most performance-critical steps of the pipeline. With this work we aim to bridge the gap between modern data-driven learning models and the vast amount of existing legacy code still in use today.
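As an illustration of the data-augmentation strategy mentioned in the abstract, the sketch below shows one way such a pipeline could be assembled with Keras preprocessing layers. The patch size, the augmentation factors and the specific layer choices are assumptions made for illustration only, not the configuration actually used in the thesis.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical patch size; the thesis trains on small texture patches.
PATCH_SIZE = 64

# Illustrative augmentation pipeline: random flips/rotations approximate
# geometric distortions, random brightness/contrast approximate illumination
# changes, and additive Gaussian noise approximates sensor noise.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomRotation(0.25),      # factor of a full turn, i.e. up to +/- 90 degrees
    layers.RandomZoom(0.1),
    layers.RandomBrightness(0.2),
    layers.RandomContrast(0.2),
    layers.GaussianNoise(0.02),       # only active when training=True
])

def augment_patch(patch):
    """Apply the augmentation pipeline to a single [H, W, C] float patch."""
    return augment(tf.expand_dims(patch, 0), training=True)[0]
```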
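For the model-compression step, a hedged example of post-training integer quantization with the TensorFlow Lite converter is sketched below; `model` and `calibration_patches` are hypothetical names, and the thesis also mentions pruning and other compression techniques that are not shown here.

```python
import tensorflow as tf

# Assumed inputs for illustration: a trained Keras model (`model`) and a small
# set of calibration patches (`calibration_patches`) drawn from the training data.
def representative_patches():
    # Yield a few hundred sample inputs so the converter can calibrate
    # the integer quantization ranges.
    for patch in calibration_patches[:200]:
        yield [tf.expand_dims(tf.cast(patch, tf.float32), 0)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_patches
# Restrict to integer-only ops so the model can run on integer accelerators.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("texture_model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```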
File | Size | Format | Access
---|---|---|---
ActivityReport.pdf | 149.86 kB | Adobe PDF | open access
phd_deGioia2022.pdf | 15.03 MB | Adobe PDF | open access
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/215763
URN:NBN:IT:UNIPI-215763