Integrating Vision and Language for Open-Vocabulary Segmentation: From Self-Supervised Models to Multimodal Reasoning
BARSELLOTTI, LUCA
2026
Abstract
Localizing and categorizing objects within an image is one of the fundamental and long-standing challenges in Computer Vision, extensively investigated since the early days of the field in the 1970s. Semantic segmentation tackles this problem by assigning a label to every pixel in the image, thereby partitioning it into coherent regions that correspond to the underlying object categories. However, fully supervised deep-learning approaches rely on costly and time-consuming pixel-level annotations drawn by human annotators. The effort required to produce such data limits its availability across datasets and domains, restricting the scalability of supervised methods, confining them to a finite set of known categories, and motivating the need for weaker forms of supervision. In recent years, the availability of large-scale image–caption datasets and the advent of vision–language models have transformed the field, enabling the joint learning of visual and textual representations. This progress has motivated researchers to extend multimodal understanding to a fine-grained, pixel-level form, giving rise to open-vocabulary semantic segmentation, a paradigm that allows models to segment arbitrary categories described by free-form text queries and thus overcomes the constraints of fixed taxonomies. This capability opens new opportunities in applications such as robotic navigation and manipulation, autonomous driving, augmented and mixed reality, and image editing. Nevertheless, transferring the global supervision provided by image captions into consistent pixel-level multimodal representations remains an open challenge. In parallel, self-supervised learning has emerged as a paradigm in which models learn rich visual representations from unlabeled data by exploiting supervisory signals intrinsic to the images themselves. This approach has produced general-purpose visual features that transfer across tasks and domains without explicit supervision. In particular, Vision Transformer–based models have demonstrated the ability to encode semantics not only globally but also locally, at the level of small image patches. This dissertation investigates methods that combine the complementary strengths of vision–language and self-supervised models to achieve open-vocabulary segmentation. We propose techniques that extract localized semantics from self-supervised representations and integrate them with text through visual prototypes and contrastive learning, demonstrating that self-supervised features naturally align with linguistic concepts when properly extracted. Building upon these insights, we extend open-vocabulary perception to embodied settings, designing robotic agents capable of object-driven navigation in photorealistic environments through instance matching and localization guided by both visual and textual cues. Finally, we explore how Multimodal Large Language Models can reason about visual grounding and referring tasks, analyzing existing approaches, tasks, and datasets that connect high-level reasoning with pixel-level localization.
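To make the paradigm described in the abstract concrete, the snippet below is a minimal, illustrative sketch (not the method proposed in the dissertation) of how dense patch features from a self-supervised Vision Transformer could be matched against text embeddings of free-form class prompts to produce an open-vocabulary segmentation map. The inputs `patch_feats` and `text_feats`, and the function name itself, are hypothetical placeholders standing in for the outputs of a self-supervised image encoder and a CLIP-like text encoder.

```python
# Hedged sketch: per-pixel labeling via patch-to-text cosine similarity.
# Assumes patch embeddings and text embeddings already live in a shared space.
import torch
import torch.nn.functional as F

def open_vocabulary_segment(patch_feats, text_feats, grid_hw, out_hw):
    """
    patch_feats: [N, D] ViT patch embeddings (N = Hp * Wp, row-major order)
    text_feats:  [C, D] embeddings of the C free-form class prompts
    grid_hw:     (Hp, Wp) patch-grid size of the ViT
    out_hw:      (H, W) desired output resolution
    returns:     [H, W] tensor of predicted class indices
    """
    # Cosine similarity between every patch and every class prompt
    patch_feats = F.normalize(patch_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    sim = patch_feats @ text_feats.t()                # [N, C]

    # Rearrange into a low-resolution score map and upsample to image size
    hp, wp = grid_hw
    sim = sim.t().reshape(1, -1, hp, wp)              # [1, C, Hp, Wp]
    sim = F.interpolate(sim, size=out_hw, mode="bilinear", align_corners=False)

    # Per-pixel argmax over the class scores yields the segmentation map
    return sim.argmax(dim=1)[0]                       # [H, W]

# Example usage with random tensors standing in for real encoder outputs
patch_feats = torch.randn(16 * 16, 384)               # 16x16 patch grid, D = 384
text_feats = torch.randn(3, 384)                      # e.g. "a dog", "grass", "sky"
seg = open_vocabulary_segment(patch_feats, text_feats, (16, 16), (224, 224))
print(seg.shape)                                      # torch.Size([224, 224])
```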
| File | Size | Format | |
|---|---|---|---|
| Barsellotti.pdf (open access; license: all rights reserved) | 14.22 MB | Adobe PDF | View/Open |
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/365008
URN:NBN:IT:UNIMORE-365008