Beyond brainwaves: exploring emotions, identity, and motor imagery through EEG-based BCI
DI MAMBRO, ANGELO
2025
Abstract
An Electroencephalography (EEG)-based Brain-Computer Interface (BCI) is a system that connects the human brain to external devices by analyzing EEG signals and translating brain activity patterns into instructions for an interactive application. Initially, EEG-based BCI solutions were developed for medical purposes in clinical and rehabilitation applications, primarily to assist patients in regaining normal life functions. Beyond this original goal, these systems have also gained importance in non-medical fields such as cybersecurity and neuroscience. On this account, this thesis shows how the EEG signal can be directly exploited to solve biometric person identification, emotion recognition, and limb activation classification tasks. Since most existing EEG-based biometric systems do not exploit the time-frequency information of EEG signals, this thesis introduces a novel identification system based on graph representations, where nodes represent EEG channel signals and edges denote the Functional Connectivity (FC) measure between pairs of channels. The model, based on Graph Convolutional Neural Networks (GCNNs), integrates spatio-temporal and functional features, capturing both local and global brain activity. Tested on the PhysioNet and the Multi-subject, Multi-session, and Multi-task Database for investigation of EEG Commonality and Variability (M3CV) datasets, the method demonstrated strong generalization across different human states (resting and active) and outperformed State-Of-the-Art (SOTA) EEG biometric techniques on specific tasks. Regarding emotion recognition, an innovative framework, namely Empátheia, is proposed that encodes EEG signals as compact images, preserving the original spatio-temporal information, and recognizes the associated emotion. Using the Processing and transfeR of Interaction States and Mappings through Image-based eNcoding (PRISMIN) framework, the original EEG signals are encoded as images, or atlases, following a spatio-temporal layout. Different deep learning models are then designed and tuned to classify the emotions captured in the produced atlases. Tests on the SJTU Emotion EEG Dataset (SEED) showed high performance and efficient data representation, suggesting new possibilities for EEG-based emotion analysis. Finally, a novel multi-stream 1D Convolutional Neural Network (CNN) architecture is proposed for limb activation classification. This method processes EEG signals through four convolutional streams with different kernel sizes to capture information at different time scales. The resulting features are combined and fed to a dense classifier to determine limb movement. Experiments on the PhysioNet EEG dataset showed that this model outperforms existing methods in both cross-subject and intra-subject settings.
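To make the graph construction described above concrete, the following is a minimal, illustrative sketch of how a channel-level connectivity graph of this kind can be assembled. The abstract does not state which FC measure the thesis uses, so Pearson correlation and the 0.5 threshold are assumptions made here purely for illustration, and `fc_adjacency` is a hypothetical helper, not code from the thesis.

```python
import numpy as np

def fc_adjacency(epoch, threshold=0.5):
    """Build a channel-level functional-connectivity graph from one EEG epoch.

    epoch: array of shape (n_channels, n_samples); each row is one EEG
    channel signal and becomes one graph node.
    Returns a symmetric (n_channels, n_channels) adjacency matrix whose
    entry (i, j) keeps the FC value between channels i and j when its
    magnitude exceeds `threshold`, and is zero otherwise.
    """
    fc = np.corrcoef(epoch)            # Pearson correlation as a stand-in FC measure
    np.fill_diagonal(fc, 0.0)          # drop self-loops
    return np.where(np.abs(fc) >= threshold, fc, 0.0)

# Toy usage: a 64-channel, 1-second epoch of random data at 160 Hz.
rng = np.random.default_rng(seed=0)
epoch = rng.standard_normal((64, 160))
adjacency = fc_adjacency(epoch)
print(adjacency.shape)                 # (64, 64)
```

The resulting adjacency matrix, together with per-node channel features, is the kind of input a GCNN-based identification model typically consumes.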
File | Access | Size | Format
---|---|---|---
Tesi_dottorato_DiMambro.pdf | open access | 10.02 MB | Adobe PDF
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/188920
URN:NBN:IT:UNIROMA1-188920