User studies in various automation levels by driving simulator
Yuan, Shi
2025
Abstract
The growing integration of automated driving systems (ADS) in modern vehicles challenges traditional paradigms of human–machine interaction. As the locus of control shifts from human drivers to intelligent systems, critical factors such as trust calibration, cognitive load, emotional engagement, and usability emerge as key determinants for safe and effective automation. This thesis explores how users perceive, adapt to, and emotionally respond to varying levels of vehicle automation, leveraging high-fidelity simulation environments augmented with multimodal physiological sensing and affective computing techniques. The research builds upon five interlinked studies, spanning contexts from driver assistance (L0–L2) to fully autonomous scenarios (L4–L5). Both fixed-base and immersive Virtual Reality (VR) simulators were employed to systematically analyze behavioral performance, situational awareness, and cognitive-affective responses, using data such as electrodermal activity (EDA), heart rate variability (HRV), gaze patterns, and facial expression analysis. A complementary study extends beyond the driving domain, investigating olfactory stimuli as a mechanism for improving attention and memory retention, thereby contributing to the broader principles of multisensory human–machine interface (HMI) design. Artificial intelligence methods, such as convolutional neural networks, were applied as analytical tools to interpret facial expressions and assist in detecting distraction or stress, complementing the physiological and behavioral measures. While not the primary focus of the thesis, these AI-based techniques enhanced the robustness of the analyses and demonstrated the potential of affective computing for developing adaptive and user-aware systems. Methodologically, the thesis advances simulation science by refining approaches to fidelity calibration, cross-modal data synchronization, and experimental balancing. 
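As a purely illustrative aside (not code from the thesis), heart rate variability is often summarized by time-domain metrics such as RMSSD, the root mean square of successive differences between consecutive RR intervals. A minimal sketch, assuming RR intervals are given in milliseconds:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences (RMSSD),
    a standard time-domain HRV metric, from RR intervals in ms."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Example: five RR intervals (ms) from a short recording
print(round(rmssd([800, 810, 790, 805, 795]), 2))  # → 14.36
```

Higher RMSSD generally reflects greater parasympathetic activity; in driving studies a drop in such metrics is commonly read as a marker of stress or load.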
Unity3D, a widely used game engine for creating immersive virtual environments, and CarMaker, a professional simulation software for testing and validating vehicle dynamics, were combined to create interactive scenarios incorporating multimodal HMI configurations including visual, auditory, gestural, and olfactory modalities. The methodological framework ensures both ecological validity and repeatability, enabling meaningful interpretation of user behavior in complex driving contexts. From a scientific perspective, this work contributes to understanding trust dynamics, cognitive-affective adaptation, and multisensory interaction in automated mobility. From an applied standpoint, it offers design guidelines for transparent, adaptive, and user-centered HMIs, particularly relevant for OEMs and technology providers developing automation from Level 2 to Level 5. Ultimately, the thesis positions simulation not merely as a testing platform but as a core research environment that bridges human factors engineering, affective computing, and sensory augmentation. By synthesizing behavioral metrics, physiological sensing, and multisensory experimentation, it establishes a comprehensive framework for designing the next generation of emotionally intelligent and user-centered automated vehicles.
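A common way to couple a vehicle-dynamics tool with a separate rendering engine, as in the Unity3D–CarMaker setup described above, is to stream vehicle state between the two processes over UDP each simulation step. The following sketch is illustrative only; the port number and JSON message schema are placeholders, not CarMaker's actual protocol:

```python
import json
import socket

# Placeholder port; a real setup would use the port configured in both tools.
PORT = 47815

# Receiver stands in for the rendering engine awaiting vehicle state.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", PORT))

# Sender stands in for the vehicle-dynamics process publishing one step.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
state = {"t": 1.25, "x": 34.2, "y": -1.1, "speed_mps": 13.9}
sender.sendto(json.dumps(state).encode(), ("127.0.0.1", PORT))

# Decode the datagram back into a state dictionary.
data, _ = receiver.recvfrom(4096)
received = json.loads(data)
print(received["speed_mps"])  # → 13.9
```

Streaming per-step state this way keeps the dynamics solver authoritative while the engine handles rendering and interaction, which supports the repeatability the framework requires.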
| File | Size | Format | License |
|---|---|---|---|
| Thesis.pdf (access only from BNCF and BNCR) | 9.12 MB | Adobe PDF | All rights reserved |
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/356495
URN:NBN:IT:POLIMI-356495