Design of brain-inspired, low-power circuits for edge computing: from Logic-in-Memory to inferential neurons
BENATTI, LORENZO
2026
Abstract
Artificial intelligence (AI) based on neural networks (NNs) has become a cornerstone of modern society, enabling efficient pattern recognition and automation across a wide range of applications. However, traditional AI approaches rely on centralized cloud computing, where large volumes of user data are processed through pre-trained NNs, raising environmental and privacy concerns due to their high computational and communication demands. The growing number of data centers, whose combined electricity consumption now exceeds 1,000 TWh per year, entails massive energy and water usage, further amplified by the training of increasingly complex NNs. Beyond algorithmic optimization, these considerations highlight the need for a paradigm shift toward hardware-level solutions that enable ultra-low-power NN training and data analysis directly on edge devices, reducing dependence on energy-hungry cloud infrastructures while preserving user privacy. In this context, bio-inspired hardware approaches, which take inspiration from the brain's ability to perform complex computations with outstanding energy efficiency, offer a promising path. Depending on the biological mechanisms emulated and the abstraction level adopted, different circuit strategies can be explored. Key enablers for overcoming the limitations of conventional architectures include the merging of computation and memory (Logic-in-Memory, LiM) and the emulation of the spiking activity and synaptic plasticity of biological neurons. Implementing such paradigms in hardware typically relies on emerging non-volatile memories (eNVMs), such as memristors. Alternative approaches are also possible through ultra-low-power analog circuits and innovative stochastic neuron models built from transistors operating in the subthreshold region. This thesis investigates and advances hardware implementations of these approaches beyond the current state of the art.
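As background for the Logic-in-Memory approach mentioned above, the material-implication (IMPLY) primitive used by memristive stateful logic can be sketched at the purely Boolean level. This is an illustrative model only, not the SIMPLY circuit itself: real implementations encode the operands as memristor resistance states and overwrite one device with the result.

```python
# Boolean sketch of material implication (IMPLY), the primitive of
# memristive stateful logic. Illustrative only: real IMPLY gates act on
# memristor resistance states and store the result in the q device.

def imply(p: bool, q: bool) -> bool:
    """IMPLY(p, q) = (NOT p) OR q."""
    return (not p) or q

# Together with an unconditional FALSE (reset to logic 0), IMPLY is
# functionally complete. For example, NAND -- itself a universal gate --
# is obtained as NAND(p, q) = IMPLY(p, IMPLY(q, FALSE)):
def nand(p: bool, q: bool) -> bool:
    return imply(p, imply(q, False))

# Full truth table of the composed NAND gate.
truth_table = {(p, q): nand(p, q) for p in (False, True) for q in (False, True)}
```

Because any Boolean function can be built from NAND, a memory array supporting only IMPLY and reset can, in principle, compute arbitrary logic in place; the SIMPLY architecture developed in this thesis refines this basic scheme to overcome the limitations of conventional IMPLY.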
LiM is explored through the development of a novel memristor-based Smart-IMPLY (SIMPLY) architecture that overcomes the typical limitations of traditional material-implication schemes. The study includes the characterization of self-directed channel (SDC) memristors employed in a custom FPGA-based LiM platform, and the first integrated circuit implementing SIMPLY operations, which shows promising energy efficiency and scalability. Analog spiking neurons with memristive synapses are also investigated. A hardware Leaky-Integrate-and-Fire (LIF) neuron was analyzed in terms of the mutual information transferred between pre- and post-synaptic signals, providing, for the first time, a quantitative validation of neuromorphic hardware against biological microcircuits. However, the limitations of eNVM technologies still hinder large-scale adoption. To address this, analog CMOS circuits in a 130 nm technology were designed for ultra-low-power AI operations, such as the logarithm and softmax functions, with transistors biased in the subthreshold region. Compensation techniques were developed and validated to counteract temperature and process variations, extending the applicability of subthreshold analog architectures to energy-constrained environments. Finally, a novel neuromorphic paradigm, the active inference framework, was implemented in hardware for the first time. This model, which describes the brain as an agent minimizing the deviation between its expectations and sensory input, was translated into the electronic domain at the single-neuron level, and a bio-inspired network for associative learning was then replicated. In terms of power consumption and area occupation, the results show an unprecedented quantitative resemblance to the biological brain, paving the way for the hardware-level computational paradigm shift envisioned to drive the next generation of AI technologies.

| File | License | Size | Format |
|---|---|---|---|
| Benatti.pdf (embargo until 29/03/2029) | All rights reserved | 20.52 MB | Adobe PDF |
Documents in UNITESI are protected by copyright, and all rights are reserved unless otherwise indicated.
https://hdl.handle.net/20.500.14242/362890
URN:NBN:IT:UNIMORE-362890