Automatic Measurement of Driving Simulators Acceptability through Design of Wearable Sensors and Implementation of Machine Learning Algorithms

AMINOSHARIEH NAJAFI, TARANEH
2025

Abstract

This doctoral research investigates driver cognitive state recognition through the development of a next-generation wearable edge-AI sensor platform, the design and execution of advanced experimental setups and analyses, and the creation of innovative deep learning (DL) models for multimodal sensor fusion that achieve high classification accuracy.

A series of carefully designed driving simulator experiments evaluated stress, attention, motion sickness, and mental engagement across diverse scenarios encompassing both manual and autonomous driving phases. Physiological signals, including six-channel electroencephalogram (EEG) signals and, in specific studies, skin potential response (SPR) and electrocardiogram (ECG) signals, were recorded using custom-developed graphical user interfaces (GUIs) that enabled real-time data acquisition, synchronization, filtering, and storage. Comprehensive preprocessing, including artifact removal and advanced filtering, ensured high data reliability and quality.

Stress assessment focused on EEG beta band power, a well-established marker of stress and cognitive load, and revealed consistently higher beta power during manual driving than during autonomous phases. Attention was quantified through the eye blink rate (EBR) derived from the EEG frontal channels; the results showed a negative correlation between EBR and beta power, indicating that heightened attention corresponds to reduced blink frequency. Motion sickness was investigated through sensory-conflict experiments, which established the beta/alpha power ratio as a robust indicator of its onset.

The research further advanced the field by developing and evaluating DL models for mental engagement prediction through multimodal sensor fusion. A proposed feature-fusion approach integrating EEG, SPR, and ECG signals achieved 82.2% accuracy, outperforming a data-fusion model (74.4%) and underscoring the effectiveness of fusing multimodal physiological signals with convolutional neural networks (CNNs).

Another contribution of this research is the conceptualization and development of VersaSens, a modular and multimodal wearable platform designed for next-generation edge-AI applications. VersaSens integrates sensor, processor, and co-processor modules, providing versatility, scalability, and energy-efficient operation. A CNN-based DL model for mental engagement prediction was deployed on the platform’s System-on-Chip (SoC), achieving 72.8% accuracy with low energy consumption and validating the platform’s real-time edge-AI capabilities.

In conclusion, this research marks a significant advancement in driver cognitive state recognition through the integration of multimodal sensor fusion, advanced DL architectures, and edge-AI wearable technologies. These contributions establish a strong foundation for future innovations in driving simulators, autonomous vehicles, human-machine interaction, and next-generation smart wearable systems.
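As a concrete illustration of the spectral markers above, the following minimal Python sketch computes EEG beta band power and the beta/alpha power ratio from a single channel using Welch's method. It is not the thesis code: the sampling rate, segment length, and band edges (alpha 8-13 Hz and beta 13-30 Hz are common conventions) are assumptions.

# Minimal sketch (not the thesis implementation): EEG beta band power and
# the beta/alpha ratio via Welch's method. Sampling rate and band edges
# are assumptions; the thesis may use different values.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

FS = 250  # assumed sampling rate, Hz
ALPHA = (8.0, 13.0)
BETA = (13.0, 30.0)

def band_power(x, fs, band):
    """Integrate the Welch power spectral density over one frequency band."""
    freqs, psd = welch(x, fs=fs, nperseg=int(2 * fs))  # 2 s segments
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(trapezoid(psd[mask], freqs[mask]))

def beta_alpha_ratio(x, fs=FS):
    """Beta/alpha power ratio, the motion sickness indicator named in the abstract."""
    return band_power(x, fs, BETA) / band_power(x, fs, ALPHA)

# Usage with synthetic data standing in for one EEG channel (60 s)
eeg = np.random.randn(60 * FS)
print(f"beta power: {band_power(eeg, FS, BETA):.4g}")
print(f"beta/alpha ratio: {beta_alpha_ratio(eeg, FS):.3f}")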
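The eye blink rate can be sketched in a similar spirit: blinks appear as large, slow deflections on frontal EEG electrodes, so a low-pass filter followed by peak detection yields a blinks-per-minute estimate. The filter cutoff, amplitude threshold, and minimum peak spacing below are illustrative assumptions, not the thesis method.

# Assumption-laden sketch: estimate eye blink rate (blinks/min) from one
# frontal EEG channel by low-pass filtering and detecting prominent peaks.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 250  # assumed sampling rate, Hz

def blink_rate(frontal, fs=FS):
    """Blinks per minute estimated from a frontal EEG channel."""
    # Low-pass at 10 Hz: keep the slow blink waveform, attenuate EEG rhythms
    b, a = butter(4, 10.0, btype="low", fs=fs)
    smooth = filtfilt(b, a, frontal)
    # Peaks well above the channel's spread, at least 0.3 s apart
    thr = np.mean(smooth) + 3 * np.std(smooth)
    peaks, _ = find_peaks(smooth, height=thr, distance=int(0.3 * fs))
    return len(peaks) / (len(frontal) / fs) * 60.0

# Usage: synthetic frontal channel with one blink-like bump every 4 s
sig = 0.1 * np.random.randn(60 * FS)
for t in np.arange(2, 58, 4):
    i = int(t * FS)
    sig[i:i + FS // 4] += np.hanning(FS // 4)
print(f"estimated blink rate: {blink_rate(sig):.1f} blinks/min")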
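Finally, the feature-fusion idea can be illustrated with a hedged PyTorch sketch: each modality (EEG, SPR, ECG) passes through its own 1-D convolutional branch, and the learned feature vectors are concatenated before a small classification head. All layer sizes, channel counts, and window lengths are placeholders rather than the architecture evaluated in the thesis.

# Hedged sketch of a feature-fusion CNN: one convolutional branch per
# modality, learned features concatenated before the classifier.
import torch
import torch.nn as nn

class ModalityBranch(nn.Module):
    """1-D CNN feature extractor for a single physiological modality."""
    def __init__(self, in_channels, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, feat_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global pooling -> fixed-size feature
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)  # (batch, feat_dim)

class FeatureFusionCNN(nn.Module):
    """Concatenate per-modality features, then classify engagement (binary)."""
    def __init__(self):
        super().__init__()
        self.eeg = ModalityBranch(in_channels=6)  # six EEG channels
        self.spr = ModalityBranch(in_channels=1)
        self.ecg = ModalityBranch(in_channels=1)
        self.head = nn.Sequential(nn.Linear(3 * 32, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, eeg, spr, ecg):
        fused = torch.cat([self.eeg(eeg), self.spr(spr), self.ecg(ecg)], dim=1)
        return self.head(fused)

# Usage with dummy 4 s windows, all modalities assumed resampled to 250 Hz
model = FeatureFusionCNN()
logits = model(torch.randn(8, 6, 1000), torch.randn(8, 1, 1000), torch.randn(8, 1, 1000))
print(logits.shape)  # torch.Size([8, 2])

A data-fusion counterpart would instead concatenate the raw (resampled) signals channel-wise and feed a single CNN; that is one plausible reading of the 74.4% baseline, not a detail stated in the abstract.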
21 May 2025
Keywords: Physiological Signal; Deep Learning; Embedded Algorithm; Driver Mental State; Edge-AI Wearable
ESSENI, David
AFFANNI, Antonio
Università degli Studi di Udine
Files in this item:
File: Taraneh_PhD_thesis_final.pdf
Size: 39.58 MB
Format: Adobe PDF
Availability: under embargo until 21/11/2026

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/217533
The NBN code of this thesis is URN:NBN:IT:UNIUD-217533