A Simulation-based Framework for Highway Driving Scenario Recognition
COSSU, MARIANNA
2025
Abstract
Deployment of highly automated vehicles relies on the recognition of driving scenarios for context adaptivity, maneuver decision-making, and verification and validation of Automated Driving Functions (ADFs). A key challenge for ADFs is managing all possible situations within a target Operational Design Domain (ODD), which is defined in terms of scenarios. However, public datasets of videos annotated with driving scenarios are very limited. We address this gap by proposing a framework for generating synthetic datasets using the CARLA simulator. The framework aims to represent a wide range of highway driving conditions, including road configurations, traffic dynamics, and environmental factors. The core of the research is the creation of synthetic driving scenarios that simulate traffic patterns and environmental conditions based on multi-parametric descriptions. The framework supports 17 highway driving scenario classes. As a significant example, we have used the proposed framework to build a dataset based on real-world data distributions, in particular those of the MOOVE project dataset, which comprises 1 million kilometers driven across 15 European countries. The framework is then validated and assessed along four main dimensions. First, we analyze the distribution of the generated features to check correspondence with real-world data. Second, we evaluate the efficiency of simulated clip generation. Third, we perform scenario recognition with a set of state-of-the-art neural network architectures, including a residual three-dimensional convolutional neural network enhanced with squeeze-and-excitation blocks and convolutional block attention modules. The scenario recognition task is framed as an action recognition problem over overlapping time windows. Fourth, we assess the generalization ability of the networks trained on the synthetic dataset by evaluating their performance on a test set of manually annotated real-world videos. Results show that the framework efficiently generates a wide variety of video clips that closely mirror the MOOVE real-world collection. The dataset is challenging for state-of-the-art recognizers, particularly in identifying scenarios with highly variable action lengths, such as braking, and in capturing subtle differences, for instance between various types of vehicle-following scenarios. Transferability to real-world data is promising but also shows gaps that may be bridged both by partial re-training and by augmenting the synthetic dataset. The recognizers operate in real time, making them suitable not only for supporting offline ADF verification and validation, but also for implementing online ADF adaptivity and context-aware decision making. To support further research and development, the dataset will be made publicly available.
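To make the multi-parametric generation concrete, the following is a minimal sketch of how a parametric highway clip could be produced with the standard CARLA Python API: a highway map is loaded, weather parameters are sampled, background traffic is spawned under autopilot, and an ego camera records frames. The map choice, parameter ranges, and camera settings are illustrative assumptions, not the framework's actual configuration.

```python
# Minimal sketch: parametric highway clip generation with the CARLA Python API.
# Parameter ranges and map choice are illustrative assumptions, not the
# thesis framework's actual configuration.
import random
import carla

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.load_world('Town04')  # Town04 contains a highway loop

# Sample environmental factors from a multi-parametric description.
weather = carla.WeatherParameters(
    cloudiness=random.uniform(0.0, 100.0),
    precipitation=random.uniform(0.0, 80.0),
    fog_density=random.uniform(0.0, 50.0),
    sun_altitude_angle=random.uniform(10.0, 90.0),
)
world.set_weather(weather)

# Spawn background traffic under autopilot to produce traffic dynamics.
blueprints = world.get_blueprint_library().filter('vehicle.*')
spawn_points = world.get_map().get_spawn_points()
random.shuffle(spawn_points)
for transform in spawn_points[:random.randint(10, 40)]:
    vehicle = world.try_spawn_actor(random.choice(blueprints), transform)
    if vehicle is not None:
        vehicle.set_autopilot(True)

# Attach a front-facing RGB camera to an ego vehicle and record frames.
ego = world.spawn_actor(blueprints.filter('vehicle.tesla.model3')[0],
                        spawn_points[-1])
ego.set_autopilot(True)
cam_bp = world.get_blueprint_library().find('sensor.camera.rgb')
cam_bp.set_attribute('image_size_x', '640')
cam_bp.set_attribute('image_size_y', '360')
camera = world.spawn_actor(cam_bp,
                           carla.Transform(carla.Location(x=1.5, z=2.0)),
                           attach_to=ego)
camera.listen(lambda image: image.save_to_disk(f'clip/{image.frame:06d}.png'))
```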
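The windowed action-recognition formulation can likewise be sketched as follows. A stock torchvision 3D ResNet stands in for the attention-enhanced architecture described in the abstract; the class count (17) comes from the text, while the window length and stride are assumed values chosen only to illustrate the overlapping-window mechanism.

```python
# Minimal sketch: scenario recognition as action recognition over overlapping
# time windows. A stock 3D ResNet stands in for the SE/CBAM-enhanced model;
# window length and stride are assumptions.
import torch
from torchvision.models.video import r3d_18

NUM_CLASSES = 17      # highway scenario classes (from the abstract)
WINDOW = 16           # frames per window (assumption)
STRIDE = 8            # 50% overlap between consecutive windows (assumption)

model = r3d_18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, NUM_CLASSES)
model.eval()

def classify_clip(frames: torch.Tensor) -> torch.Tensor:
    """frames: (C, T, H, W) video tensor; returns one class index per window."""
    windows = [frames[:, t:t + WINDOW]
               for t in range(0, frames.shape[1] - WINDOW + 1, STRIDE)]
    batch = torch.stack(windows)          # (N, C, WINDOW, H, W)
    with torch.no_grad():
        logits = model(batch)             # (N, NUM_CLASSES)
    return logits.argmax(dim=1)

# Example: a random 64-frame clip at 112x112 yields 7 overlapping windows.
print(classify_clip(torch.rand(3, 64, 112, 112)))
```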
File: phdunige_4226983.pdf (Adobe PDF, 9.93 MB, open access)
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/199671
URN:NBN:IT:UNIGE-199671