From Experience to Meaning: The Role of Linguistic and Perceptual Priors in Semantic Memory

ANCERESI, GIORGIA
2026

Abstract

Human semantic memory emerges from the continuous interplay between linguistic and sensorimotor experiences. This thesis investigates how experiential priors derived from language, vision, and touch jointly shape conceptual knowledge, relying mainly on computational models as proxies for experience and on behavioral experiments with sighted and blind individuals. Three main research lines are presented. First, we show that the absence of visual experience enhances sensitivity to linguistic distributional patterns, suggesting that language compensates for missing perceptual traces. Second, we find evidence for notation-dependent differences in symbolic numerical processing, which can be explained by subtle differences between the corresponding linguistic priors. Third, we examine the domain of continuous magnitudes, showing that linguistic, visual, and tactile inputs flexibly converge in the organization of size representations, with language compensating for the absence of perceptual priors (as in blind individuals) and tactile experience modulating the informativeness of the other modalities. Across these studies, computational models provided approximations of experiential history, allowing us to test, in a data-driven way, how meaning arises from multimodal sources. Collectively, the findings support an integrated, experience-based account of semantic memory, in which linguistic and perceptual inputs dynamically interact to scaffold human understanding.
20 February 2026
English
VECCHI, TOMASO ELIA
Università degli studi di Pavia
Files in this item:
AnceresiGiorgia PhD Thesis REV.pdf
Embargo until 01/09/2027
License: All rights reserved
Size: 2.41 MB
Format: Adobe PDF

Documents in UNITESI are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/357686
The NBN code of this thesis is URN:NBN:IT:UNIPV-357686