
The functional neuroanatomy of the speech network: a shared phonological neural code for heard, imaged and spoken phonemes.

2016

Abstract

Over the past 25 years, functional magnetic resonance imaging (fMRI) has been extensively deployed to correlate changes in brain metabolism with behavioral manipulations through the blood oxygen level-dependent (BOLD; Ogawa et al., 1990) signal, an in vivo measurement attributed to the coupling between neural activity and changes in cerebral blood flow, volume, and oxygenation in the parenchyma and surrounding vasculature of the brain (neurovascular coupling; Villringer & Dirnagl, 1994). In this study, we investigated the neurofunctional correlates of language sounds with BOLD fMRI by examining how low-level acoustic and motor linguistic information is represented regionally within the speech network. We also explored whether regions specializing in different aspects of speech share a neural code, by testing an intrinsically motor-acoustic phonological model based on physical properties (spectral structure; motor properties). To address whether, how, and where phonological information is represented in the brain, we attempted multivariate classification of the complete Italian vowel system and a matched set of pure tones, using a searchlight-based classifier on fMRI data collected during a multi-modal paradigm comprising tonotopy, vowel listening, covert (imagined), and overt (articulated) speech tasks; most importantly, phonological discrimination during imagery of speech sounds was attempted here for the first time. Modality-specific, within-searchlight rank tests on similarity measurements revealed a set of left-lateralized regions of interest (ROIs), comprising ventral prefrontal and superior temporal areas, where phonological information is represented, though with poor inter-modal overlap.
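The searchlight analysis named above can be illustrated with a minimal, self-contained sketch on synthetic data. Everything here is an illustrative assumption, not the thesis's actual pipeline: a simple leave-one-trial-out nearest-centroid classifier stands in for the actual classifier and rank tests, and the grid, trial counts, and effect size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2 conditions (e.g. two vowels), 10 trials each, on an
# 8x8x8 voxel grid. A condition-specific pattern is embedded in one corner.
n_trials, grid = 10, 8
labels = np.repeat([0, 1], n_trials)
data = rng.normal(size=(2 * n_trials, grid, grid, grid))
data[labels == 1, :3, :3, :3] += 1.0  # signal region

def searchlight_accuracy(data, labels, radius=1):
    """Leave-one-trial-out nearest-centroid accuracy in a cubic searchlight."""
    n, gx, gy, gz = data.shape
    acc = np.zeros((gx, gy, gz))
    for x in range(gx):
        for y in range(gy):
            for z in range(gz):
                # Flatten the voxels inside the searchlight around (x, y, z).
                sl = data[:,
                          max(x - radius, 0):x + radius + 1,
                          max(y - radius, 0):y + radius + 1,
                          max(z - radius, 0):z + radius + 1].reshape(n, -1)
                correct = 0
                for i in range(n):  # leave one trial out
                    train = np.delete(np.arange(n), i)
                    c0 = sl[train][labels[train] == 0].mean(axis=0)
                    c1 = sl[train][labels[train] == 1].mean(axis=0)
                    pred = int(np.linalg.norm(sl[i] - c1) <
                               np.linalg.norm(sl[i] - c0))
                    correct += pred == labels[i]
                acc[x, y, z] = correct / n
    return acc

acc = searchlight_accuracy(data, labels)
print("mean accuracy in signal region:", acc[:3, :3, :3].mean().round(2))
print("mean accuracy elsewhere:", acc[5:, 5:, 5:].mean().round(2))
```

Accuracy should approach ceiling only where the searchlight overlaps the embedded pattern and hover near chance (0.5 for two classes) elsewhere, which is the logic by which searchlight maps localize information-bearing regions.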
After information-content measures were computed within each ROI, principal component analysis (PCA) revealed that the organization and representation of phonological information within the speech network are, to some extent, shared across regions and modalities.
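The PCA step can likewise be sketched minimally. The ROI-by-measure matrix below is synthetic and its shape, scaling, and the planted shared component are illustrative assumptions only; the point is that a dominant first component across ROI/modality rows is what would indicate a partially shared organization.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy matrix: rows = ROI x modality combinations, columns = information-content
# measures. A single component is planted so that it is shared across all rows.
n_rows, n_measures = 12, 7
shared = rng.normal(size=(1, n_measures))              # shared profile
X = 2.0 * rng.normal(size=(n_rows, 1)) @ shared \
    + 0.5 * rng.normal(size=(n_rows, n_measures))      # row-specific noise

# PCA via SVD of the column-centered matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)

print("variance explained by PC1: %.2f" % explained[0])
```

A first component that explains most of the variance means the measure profiles of the different ROIs and modalities are largely rescaled versions of one another, i.e. a relatively shared representational code.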
24 February 2016
Italian
Pietrini, Pietro
Marotta, Giovanna
Università degli Studi di Pisa
Files in this item:
tesi_etd_final.pdf
Open Access since 03/03/2019
Type: Other attached material
Size: 14.13 MB
Format: Adobe PDF
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/139812
The NBN code of this thesis is URN:NBN:IT:UNIPI-139812