Apprendere il significato. Un’analisi linguistica delle tecniche di rappresentazione e addestramento dei Language Model
CAPONE, LUCA
2024
Abstract
This work undertakes a linguistic and philosophical examination of the representation and training techniques of Language Models. The thesis is divided into three main sections. The first section examines the connectionist paradigm for content representation and for solving complex tasks through neural networks. It then presents the critiques raised by the competing symbolic paradigm. The section closes by establishing the incompatibility between the two approaches and highlighting the representational potential that neural networks exhibit when dealing with language, despite the lack of a definitive theory of meaning. The second section identifies structural linguistics, as proposed by Saussure in the Course in General Linguistics, as a theory of meaning consistent with connectionist positions. Structuralism defines meaning as the oppositional value of a sign within a system. According to this semantic theory, denotation does not constitute the condition of possibility of meanings, as in the symbolic paradigm, but rather represents one of the multiple possibilities of use of a sign. In this respect, connectionist models seem to possess the essential attributes to represent the content plane of natural historical languages. The third section presents an analysis of recent Language Models (Word2Vec and Transformers), with a particular focus on the vector representations learned through language modeling. These representations are found to be consistent with the semantic principles endorsed by structural linguistics. This observation suggests that Language Models are indeed capable of representing the meanings of signs by modeling a synchronic state of a natural historical language.

File | Size | Format
---|---|---
Capone Luca tesi di dottorato.pdf (embargo until 01/04/2027) | 11.69 MB | Adobe PDF
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/122884
URN:NBN:IT:UNICAMPUS-122884