
Bridging symbolic and subsymbolic reasoning with MiniMax Entropy models.

2020

Abstract

Deep Learning is considered the foundation of a new technological revolution that is going to radically change human beings' lives. There is no doubt that, in the coming years, the use of deep learning techniques will lead to the development of outstanding applications that will totally change the way people interact with each other and with their environment, and how they conceive work, travel, and so on. However, while in the domain of applications deep learning is destined to revolutionize many human activities, it is hard to believe that these techniques, in isolation, can lead to the great dream of general artificial intelligence. Indeed, deep learning algorithms have been shown to reach human-like performance in many isolated tasks, such as image recognition or speech recognition, but they still struggle to integrate these single activities into a truly intelligent behaviour. Moreover, they are tremendously data-hungry: is recognizing a cat after having seen one million cats an intelligent behaviour? There is a clear need for more complex and general theories, in which the outstanding deep learning techniques are not a final recipe but only an ingredient. We hope this thesis is a small step in this direction. Taking inspiration from both symbolic artificial intelligence, strongly based on mathematical logic, and the behavioural sciences, this thesis formulates and investigates a new theory of how intelligent behaviours can be the outcome of the seamless integration of two reasoning mechanisms: one subsymbolic, associative, and fast; the other symbolic, cautious, and slow. This integration naturally emerges from a MiniMax Entropy principle, in which the model of an intelligent system is asked to describe the environment it is exposed to by minimizing its internal confusion, while keeping maximum uncertainty about anything it is not able to explain.
The theory naturally intersects multiple AI fields, such as Statistical Relational Learning, Deep Learning, Constrained Optimization, Probabilistic Graphical Models, and Neuro-Symbolic Integration, to name a few. The theory is extremely general, and multiple existing models can be described as MiniMax Entropy models. Some practical instances of the theory are proposed and empirically investigated, showing competitive results on many different learning and reasoning tasks with respect to other task-specific state-of-the-art approaches. Moreover, actual programming frameworks inspired by the theory are proposed and provided to the community for future investigations.
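As a purely illustrative sketch of what a MiniMax Entropy principle typically looks like (the symbols below are assumptions for exposition, not the thesis's exact formulation), the two sides can be written as a nested entropy problem: the inner step keeps maximum uncertainty subject to matching the statistics the model can explain, while the outer step selects the description that minimizes the remaining confusion:

```latex
% Maximum-entropy side: among all distributions p that reproduce the
% observed statistics \hat{\mu}_i (what the model is able to explain),
% pick the least committed one.
p^{*}_{\Phi} \;=\; \arg\max_{p}\; H(p)
\quad \text{s.t.} \quad \mathbb{E}_{p}\!\left[\phi_i(x)\right] = \hat{\mu}_i,
\qquad i = 1, \dots, m

% Minimum-entropy side: choose the set of descriptors
% \Phi = \{\phi_1, \dots, \phi_m\} that most reduces the model's
% internal confusion about the environment.
\Phi^{*} \;=\; \arg\min_{\Phi}\; H\!\left(p^{*}_{\Phi}\right)
```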
English
Marco Gori
Università degli Studi di Firenze
Files in this item:
There are no files associated with this item.

Documents in UNITESI are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/146555
The NBN code of this thesis is URN:NBN:IT:UNIFI-146555