Explaining Complexity in Human Language Processing: A Distributional Semantic Model

CHERSONI, EMMANUELE
2018

Abstract

The present work deals with the problem of semantic complexity in natural language, proposing a hypothesis based on the features of natural language sentences that determine their difficulty for human understanding. We aim to introduce a general framework for semantic complexity in which processing difficulty depends on the interaction between two components: a Memory component, responsible for the storage of corpus-extracted event representations, and a Unification component, responsible for combining the units stored in Memory into more complex structures. We propose that semantic complexity depends on the difficulty of building a semantic representation of the event or situation conveyed by a sentence, which can either be retrieved directly from semantic memory or built dynamically by solving the constraints included in the stored representations. To test our intuitions, we built a Distributional Semantic Model to compute a compositional cost for the sentence unification process. Our tests on several psycholinguistic datasets showed that our model is able to account for semantic phenomena such as the context-sensitive update of argument expectations and the interpretation of logical metonymies.
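To make the idea of a compositional cost more concrete, here is a minimal, hypothetical sketch (not the model actually implemented in the thesis): it assumes pre-trained distributional vectors and approximates the cost of unifying a candidate argument with a verb's expectations as the complement of a thematic-fit score, i.e. the cosine similarity between the candidate vector and a prototype built from typical corpus-attested fillers. All words and vectors in the example are illustrative placeholders.

```python
# Illustrative sketch only: random vectors stand in for corpus-derived embeddings,
# and the cost measure is a simplified stand-in for the thesis's compositional cost.
import numpy as np

def cosine(u, v):
    # Cosine similarity between two distributional vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def thematic_fit(candidate_vec, typical_filler_vecs):
    # Prototype of the expected argument: centroid of its typical fillers.
    prototype = np.mean(typical_filler_vecs, axis=0)
    return cosine(candidate_vec, prototype)

def compositional_cost(candidate_vec, typical_filler_vecs):
    # Higher cost when the candidate argument fits the expectations poorly.
    return 1.0 - thematic_fit(candidate_vec, typical_filler_vecs)

# Toy usage: hypothetical typical objects of "drink" vs. an atypical candidate.
rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=50) for w in ["beer", "wine", "book"]}
typical_objects = [vectors["beer"], vectors["wine"]]
print(compositional_cost(vectors["book"], typical_objects))  # likely higher cost
print(compositional_cost(vectors["beer"], typical_objects))  # likely lower cost
```

In the framework described above, a score of this kind would contribute to the Unification cost: arguments that match the expectations stored in Memory are cheap to integrate, while unexpected ones require building a new representation and thus incur a higher cost.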
12-Jul-2018
Italian
computational psycholinguistics
distributional semantics
N400
semantic complexity
semantic memory
sentence comprehension
thematic fit
unification
Blache, Philippe
Lenci, Alessandro
Files in this record:
File                    Size       Format      Access
report_fine_corso.pdf   178.29 kB  Adobe PDF   not available
thesis_chersoni.pdf     5.98 MB    Adobe PDF   open access

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/133482
The NBN code of this thesis is URN:NBN:IT:UNIPI-133482