Model-aided data-driven approaches for sound field analysis and processing
Marco Olivieri
2024
Abstract
Sound field analysis and processing refer to the study of pressure waves for the characterization and manipulation of acoustic fields. They involve measuring, investigating, and extracting the acoustic properties of a specific environment, such as rooms, offices, concert halls, and other enclosed spaces. The understanding of sound generation and propagation and the prediction of wave interactions with multiple objects are a highly active area of research within the signal-processing community. Indeed, the description of sound fields enables the creation of immersive, high-quality listening experiences in a wide range of applications, from the audio production and musical instrument industries to virtual reality and entertainment. In this thesis, we propose novel methodologies for both the analysis and the processing of sound fields, taking advantage of the physical knowledge embedded in acoustic models and of the information extracted from available recordings. We define compact representations that encode the spatial information of sound sources, and we develop data-efficient solutions for the interpolation and manipulation of sound scenes. Specifically, we provide novel techniques for the characterization of vibrating sources and for the visualization of acoustic fields, in order to effectively describe the interaction of acoustic entities within the space. Moreover, we address sound field reconstruction and speech separation with solutions that are flexible with respect to the specific microphone configuration adopted. The problem of analyzing and extrapolating the whole pressure field from microphone acquisitions has been widely studied in the literature. Existing solutions can be broadly classified into those based on physical models of acoustic propagation and those based on data-driven approaches.
In the first case, mathematical formulations of acoustic fields, such as the plane-wave decomposition, describe the interactions of the acoustic system through a set of equations. Conversely, the second case relies on chains of learned mathematical operations, e.g., Deep Neural Networks (DNNs), to build a representation of the measured data and provide estimates of the desired solution. However, today's audio solutions face two main challenges: the strong modeling assumptions that limit the accuracy and flexibility of model-based approaches, and the limited generalization of classical data-driven methods, which typically depends on the quality and size of the dataset adopted during training. Therefore, new solutions are needed to overcome these limitations and enable the full description of acoustic fields even from a limited set of measurements. Here, we present novel methodologies that combine the advantages of Deep Learning (DL) strategies and model-based methods in the context of sound field analysis and processing. The devised model-aided data-driven approaches characterize the pressure radiation of vibrating sources, encoding acoustic features of the sound field in unified representations that can be investigated and manipulated to extract useful information. The key feature of the proposed solutions is the integration of DNNs, which exploit the available data, with prior knowledge of well-known signal models grounded in the physics of acoustics. In the context of sound field analysis, information about the properties of sound sources plays a crucial role in describing the acoustics of a given space. The characterization of vibrating sources, for example, is key to modeling their pressure radiation and predicting their interactions with nearby objects and surfaces.
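The plane-wave decomposition mentioned above can be illustrated with a small sketch. All quantities below (frequency, array layout, number of candidate directions) are illustrative assumptions, not values from the thesis: a 2D narrowband sound field is modeled as a superposition of plane waves whose complex amplitudes are fitted by least squares from a handful of microphone pressures, after which the fitted model can extrapolate the field at unobserved points.

```python
import numpy as np

c = 343.0                 # speed of sound [m/s]
f = 500.0                 # analysis frequency [Hz] (illustrative)
k = 2 * np.pi * f / c     # wavenumber [rad/m]

# Dictionary of N candidate plane-wave directions on the unit circle.
N = 64
angles = np.linspace(0, 2 * np.pi, N, endpoint=False)
directions = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (N, 2)

# M microphone positions (here random, within a 1 m square).
rng = np.random.default_rng(0)
mics = rng.uniform(-0.5, 0.5, size=(8, 2))                        # (M, 2)

# Steering matrix H[m, n] = exp(-j k d_n . r_m): pressure of wave n at mic m.
H = np.exp(-1j * k * mics @ directions.T)                         # (M, N)

# Synthesize a "measured" field from two plane waves, then fit coefficients.
true_coeffs = np.zeros(N, dtype=complex)
true_coeffs[[5, 40]] = [1.0, 0.5j]
p_meas = H @ true_coeffs
coeffs, *_ = np.linalg.lstsq(H, p_meas, rcond=None)

# The fitted model extrapolates the pressure at an unobserved point.
r_new = np.array([[0.2, -0.1]])
p_new = np.exp(-1j * k * r_new @ directions.T) @ coeffs
```

With fewer microphones than candidate directions the system is underdetermined, which is exactly why the abstract's trade-off between model assumptions and available data arises: the least-squares fit reproduces the measurements, but extrapolation quality depends on how well the plane-wave model matches the true field.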
In this sense, Nearfield Acoustic Holography (NAH) enables accurate predictions of the vibrational field on sources in a fully contactless way. This property is an essential requirement whenever the structure under analysis is particularly fragile, e.g., for musical instruments. Thanks to the collaboration between our research group and the Violin Museum in Cremona, Italy, which houses several stringed instruments made by the luthier Antonio Stradivari, we gained experience in simulating and predicting the dynamic properties of musical instruments. In particular, we had the opportunity to study their acoustic properties and to develop new techniques for non-invasive measurements. For this reason, we introduce an innovative data-driven approach based on Convolutional Neural Networks (CNNs) to estimate the velocity field on the surface of different wooden violin plates with varying outlines and shapes, starting from pressure measurements acquired in their proximity. To the best of our knowledge, this is the first time a DL strategy has been applied in the context of NAH, and we prove its effectiveness with respect to model-based methods in the literature. Moreover, we take advantage of prior knowledge of the underlying propagation problem, represented by the Kirchhoff-Helmholtz integral equation, to increase the accuracy of the CNN estimates, thus providing explainable and physically meaningful solutions. It is worth noting that the characterization of vibrating objects and of the spatial features of sound sources contributes to a better design, analysis, and understanding of the properties of sound fields. Such information is typically captured with multiple microphones, usually arranged in clusters or arrays within the environment. Acoustic imaging techniques represent a powerful approach to collect and encode the spatial information of the sound field.
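For context, the classical model-based NAH baseline against which such a CNN is compared can be sketched as a regularized inverse problem. The snippet below is a minimal illustration with assumed geometry and a simplified Rayleigh-like free-field propagator, not the thesis' Kirchhoff-Helmholtz formulation: surface normal velocities are propagated to a hologram plane through a discretized kernel G, and recovered from noisy pressures by Tikhonov inversion.

```python
import numpy as np

rng = np.random.default_rng(1)
c, rho, f = 343.0, 1.21, 700.0          # air constants, illustrative frequency
k = 2 * np.pi * f / c
omega = 2 * np.pi * f

# Source grid (z = 0) and hologram grid (z = 2 cm), 8x8 points each (assumed).
xs = np.linspace(-0.1, 0.1, 8)
src = np.array([(x, y, 0.0) for x in xs for y in xs])
hol = np.array([(x, y, 0.02) for x in xs for y in xs])

# Rayleigh-like kernel: G[i, j] = j*omega*rho * exp(-j*k*r_ij) / (2*pi*r_ij).
r = np.linalg.norm(hol[:, None, :] - src[None, :, :], axis=-1)
G = 1j * omega * rho * np.exp(-1j * k * r) / (2 * np.pi * r)

# Forward-propagate a smooth velocity pattern and add measurement noise.
v_true = np.exp(-(src[:, 0] ** 2 + src[:, 1] ** 2) / 0.004).astype(complex)
p = G @ v_true
p += 0.01 * np.abs(p).max() * (rng.standard_normal(p.shape)
                               + 1j * rng.standard_normal(p.shape))

# Tikhonov inversion: v_hat = (G^H G + lam*I)^-1 G^H p.
lam = 1e-2 * np.linalg.norm(G, 2) ** 2
v_hat = np.linalg.solve(G.conj().T @ G + lam * np.eye(G.shape[1]),
                        G.conj().T @ p)
```

The regularization weight `lam` trades reconstruction detail against noise amplification, which is precisely the sensitivity that motivates replacing this hand-tuned inversion with a physics-informed learned model.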
Specifically, they enable visual representations of the acoustic fields that can be easily investigated and manipulated. Here, we focus on the analysis of sound scenes whose sources are concentrated in a confined space, e.g., for teleconferencing applications. We introduce a novel linear operator that maps the acoustic features captured by a circular microphone array surrounding the region of interest into a compact domain, denoted as "angular space", thus enabling the adoption of pattern analysis techniques to extract source properties. Differently from other acoustic imaging methods, the devised model inherently combines the local information of multiple distributed microphone arrays into a single, unified representation without complex projection or triangulation operations. We prove the ability of the angular space to efficiently describe the properties of real-world sound fields with examples of source localization applications adopting different microphone setups. Regarding the processing of sound fields, typical application scenarios rely on a small set of available pressure data to extract target signals or identify hidden acoustic features of the sound scene. Nevertheless, the ever-evolving trends in immersive multimedia communication require the estimation of the whole acoustic field within a specific environment. Sound field reconstruction addresses this problem by recovering the pressure data at target locations starting from a small set of observations, thus requiring specific interpolation procedures. Although classical solutions are based on compact and intuitive descriptions of the sound field, they are limited by the trade-off between accuracy and flexibility, since simplified models only approximate the actual behavior of acoustic fields. Recently, data-driven approaches have proved effective for the reconstruction of sound fields thanks to the information extracted from the available data.
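One common way to couple such data-driven reconstruction with acoustic physics, shown here as an illustrative sketch rather than the actual training objective of the thesis, is to penalize the finite-difference residual of the acoustic wave equation, d2p/dt2 - c^2 * d2p/dx2 = 0, evaluated on the estimated space-time pressure field: a physically consistent field drives the penalty toward zero, while an arbitrary field does not.

```python
import numpy as np

def wave_equation_residual(p, dx, dt, c=343.0):
    """Mean squared finite-difference residual of the 1D wave equation
    on a space-time field p[t, x] (interior points only)."""
    p_tt = (p[2:, 1:-1] - 2 * p[1:-1, 1:-1] + p[:-2, 1:-1]) / dt**2
    p_xx = (p[1:-1, 2:] - 2 * p[1:-1, 1:-1] + p[1:-1, :-2]) / dx**2
    return np.mean((p_tt - c**2 * p_xx) ** 2)

# A traveling wave p(x, t) = sin(k x - omega t) with omega = c k solves the
# wave equation, so its residual is small; a random field's is not.
c, kx = 343.0, 2 * np.pi / 1.0
omega = c * kx
x = np.linspace(0, 2, 401)
t = np.linspace(0, 0.01, 401)
dx, dt = x[1] - x[0], t[1] - t[0]
P = np.sin(kx * x[None, :] - omega * t[:, None])

res_physical = wave_equation_residual(P, dx, dt, c)
res_random = wave_equation_residual(
    np.random.default_rng(2).standard_normal(P.shape), dx, dt, c)
```

In a physics-informed training loop, a term of this form would be added to the data-fitting loss so that the network's output respects the propagation model even where no microphones are available.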
However, most existing DL methods for sound field reconstruction provide accurate estimates only for the magnitude of the field and only in height-invariant, i.e., 2D, scenarios. Here, we aim to recover the entire acoustic field in a target 3D region of the environment by combining the feature extraction capabilities of DNNs with the propagation model of sound waves represented by the wave equation. Therefore, we define a time-domain neural network that estimates the desired signals from a small and sparse set of real measurements, improving the reconstruction performance over recent state-of-the-art approaches thanks to the physics-informed prior. Another audio processing task of particular interest is speech separation, which involves the isolation and enhancement of a target speech signal. In this thesis, we focus on the extraction of a target speaker in a noisy and reverberant environment from the mixture acquired by a linear microphone array, e.g., for hands-free interaction devices. Classical algorithms rely on beamforming techniques to spatially filter the sound field coming from a specific direction, or on recent data-driven approaches to extract the desired signals directly from data. However, such methods lack generalization across different acoustic conditions and, in general, are limited by the microphone array configuration adopted in the specific measurement setup. Therefore, we propose a novel real-time model that combines classical beamforming operators with data-driven strategies. Thanks to the adoption of a beamspace representation based on a predefined set of steerable directions, we define a lightweight CNN that is agnostic to the measurement setup, i.e., to the number of microphones and the inter-sensor spacing of the array.
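The idea of a beamspace built from a predefined fan of steering directions can be sketched with a narrowband delay-and-sum beamformer; the array geometry, frequency, and direction fan below are illustrative assumptions, not the thesis setup. Whatever the number of microphones or their spacing, the output dimension equals the number of steered directions, which is what makes a network operating on this representation geometry-agnostic.

```python
import numpy as np

c, f = 343.0, 1000.0
k = 2 * np.pi * f / c

M, d = 6, 0.04                                  # mics and spacing [m] (assumed)
mic_x = (np.arange(M) - (M - 1) / 2) * d        # centered uniform linear array

thetas = np.deg2rad(np.linspace(-60, 60, 9))    # fixed fan of steered directions
# Far-field steering vectors a[m, q] = exp(j k x_m sin(theta_q)).
A = np.exp(1j * k * mic_x[:, None] * np.sin(thetas)[None, :])   # (M, Q)

# Simulate a narrowband source arriving from 20 degrees.
theta_src = np.deg2rad(20.0)
x = np.exp(1j * k * mic_x * np.sin(theta_src))                  # (M,)

# Beamspace projection: one delay-and-sum output per steered direction.
beamspace = (A.conj().T @ x) / M                                # (Q,)

# The beam steered closest to the true direction responds most strongly.
best = thetas[np.argmax(np.abs(beamspace))]
```

A CNN fed with `beamspace` (here of fixed size Q = 9) never sees M or d directly, so in principle the same network can serve arrays with different numbers of microphones or spacings, as long as the steering fan is recomputed for each geometry.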
We prove that the devised method estimates the desired speaker even when the sound field is captured by a microphone array configuration not seen during the training phase, thus providing robust generalization capabilities in real-world scenarios. The promising results presented in this manuscript show that combining features extracted from data with the prior knowledge given by the underlying physical models is a powerful technique for audio applications. The devised model-aided data-driven approaches enable data-efficient solutions with the accuracy and flexibility required by everyday acoustic devices. We accurately describe the spatial properties of the sources through visual representations of the sound field to analyze the acoustic scene. Subsequently, we provide robust generalization with respect to different measuring conditions and limited numbers of observations for applications in practical scenarios. Therefore, we believe that this thesis can be considered the starting point for a new class of algorithms for the characterization and manipulation of sound fields that take advantage of the description given by acoustic models and of the learning capabilities of neural networks. Moreover, we envision the integration of this framework also for the rendering of synthesized acoustic environments. With the combination of model-based and data-driven methods, new space-time audio processing solutions will be developed for immersive audio applications that rely on increasingly small and spatially distributed sensors.
File | Access | Size | Format
---|---|---|---
OLIVIERI_PHD_THESIS.pdf | Open Access from 15/03/2025 | 85.8 MB | Adobe PDF
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/207219
URN:NBN:IT:POLIMI-207219