Events based Multimedia Indexing and Retrieval
Ahmad, Kashif
2017
Abstract
Event recognition is one of the multimedia applications that has been gaining ground recently; however, it has received relatively little attention compared to other applications. The methodologies presented here are aimed at event-based analysis of multimedia content, approached from three perspectives: (i) event recognition in single images, (ii) event recognition in personal photo collections, and (iii) fusion of social media information and satellite imagery for natural disaster detection. A close look at the relevant literature suggests that most attention has been paid to event recognition in single images. Event recognition in personal photo collections has also attracted a number of interesting solutions. Natural disaster detection in images from social media and satellite imagery, however, is relatively new. As a matter of fact, many issues remain unsolved, mostly due to the heterogeneity, multi-modality, and unstructured nature of the data. In this dissertation, such open problems are presented and analyzed, and new perspectives and approaches are suggested, alongside detailed experimental validation and analysis. In detail, our contribution is multi-fold. On the one hand, we aim to demonstrate that the fusion of different feature extraction and classification strategies can outperform individual methods by jointly exploiting the learning capabilities of individual deep models. On the other hand, we analyze the importance of event-salient objects and local image regions in event recognition. We also present a novel framework for event recognition in personal photo collections. Moreover, we present our system JORD, and our Convolutional Neural Network (CNN)- and Generative Adversarial Network (GAN)-based fusion of social media and satellite images for natural disaster detection. A thorough experimental analysis of each proposed solution is provided on benchmark datasets, along with potential directions for future work.
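The decision-level fusion idea mentioned in the abstract can be illustrated with a minimal sketch: several independently trained CNN classifiers each score an image, and their softmax outputs are combined by weighted averaging so the individual models' strengths are exploited jointly. The backbones (ResNet-50, VGG-16) and the fusion weights below are assumptions for illustration, not the configuration described in the thesis.

```python
# Illustrative late-fusion sketch (not the thesis implementation): average the
# softmax scores of several pretrained CNNs at decision level.
import torch
import torchvision.models as models

def fused_prediction(image_batch, weights=(0.5, 0.5)):
    """Combine class scores from two CNN backbones by weighted averaging."""
    # Backbone choice is a placeholder assumption; any set of classifiers
    # trained on the same label space could be fused the same way.
    backbones = [
        models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval(),
        models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval(),
    ]
    with torch.no_grad():
        # Softmax makes the per-class scores of different models comparable.
        scores = [torch.softmax(m(image_batch), dim=1) for m in backbones]
    fused = sum(w * s for w, s in zip(weights, scores))
    return fused.argmax(dim=1)  # final predicted label per image
```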
File | Size | Format | Access
---|---|---|---
Disclaimer.pdf | 1.02 MB | Adobe PDF | Access only from BNCF and BNCR
My_PhD_Thesis.pdf | 40.85 MB | Adobe PDF | Open access
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/106934
URN:NBN:IT:UNITN-106934