From sequence processing to abstract cognition: transitive inference paradigm in artificial and cortical networks
RAGLIO, SOFIA
2025
Abstract
Sequence processing is fundamental to almost all our daily activities, from understanding conversations (sequences of words) to preparing lunch (a sequence of ordered actions) and even recalling the path taken to find a parked car. At the same time, we also need inferences to understand the world around us, for instance when comparing two ice cream places by recalling how each relates to the best ice cream place we have ever tried, or when generalizing driving rules to streets never driven before. These two apparently separate domains, which seem to rely on completely different cognitive strategies, can actually be unified under the transitive inference paradigm, which requires making inferences from ranked orders (e.g. inferring that A > C, knowing only that A > B > C). In this work we show how a simple linear model, implemented in a recurrent neural network, is able to solve this paradigm. We observe the theoretically predicted representational geometry in the dorsal premotor cortical networks of two rhesus monkeys, verifying several modeling predictions and explaining the animals' behavior. Finally, we extend the modeling results to a more general inferential problem by exploiting contextual segregation, introducing an additional hierarchical layer into the sequence representation.
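The core of the paradigm can be illustrated with a minimal sketch (not the thesis model): each item is assigned a learned scalar rank, training uses only adjacent pairs (A > B, B > C, ...), and choices on unseen non-adjacent pairs then follow from rank comparison. The delta rule below is a hypothetical stand-in for the simple linear model mentioned in the abstract; all names and parameters are illustrative.

```python
# Toy transitive-inference learner: scalar ranks trained on adjacent
# pairs only, then tested on non-adjacent (inferred) pairs.
items = list("ABCDEFG")
rank = {s: 0.0 for s in items}  # learned scalar value per item

lr = 0.1  # learning rate (illustrative)
for _ in range(200):
    # Training feedback exists only for adjacent pairs, e.g. A>B, B>C.
    for hi, lo in zip(items, items[1:]):
        # Delta rule: push the pair's rank difference toward 1.
        err = 1.0 - (rank[hi] - rank[lo])
        rank[hi] += lr * err
        rank[lo] -= lr * err

def choose(x, y):
    """Pick the higher-ranked item; also works for pairs never trained."""
    return x if rank[x] > rank[y] else y

# Transitive inference: the pair (A, C) was never presented in training.
assert choose("A", "C") == "A"
assert choose("B", "F") == "B"
```

Because learning collapses the ordered pairs onto a single rank axis, any comparison (adjacent or not) reduces to a linear readout of rank difference, which is the property the thesis exploits.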
File | Size | Format | Access
---|---|---|---
Tesi_dottorato_Raglio.pdf | 28.49 MB | Adobe PDF | open access
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/197667
URN:NBN:IT:UNIROMA1-197667