Advances in data-driven audio signal processing
ALESSANDRO ILIC MEZZA
2025
Abstract
The advent of deep learning has profoundly changed the field of signal processing. With the surge in large audio datasets, this paradigm shift has fundamentally transformed how we analyze, synthesize, and process sound. Machine learning methods, and deep neural networks in particular, have enjoyed remarkable success in tasks such as sound classification, music generation, and audio modeling. By leveraging a data-driven approach, these models achieve unprecedented levels of accuracy and contextual understanding, enabling the automation of complex tasks that once depended on human expertise or handcrafted algorithms. The impact of machine learning is perhaps most evident in the analysis and classification of everyday sounds, where access to large collections of audio data has enabled models to recognize patterns that were previously difficult or outright impossible to capture through the human senses. The ability to apply these models to real-world scenarios, from environmental monitoring to smart home systems, has demonstrated the immense potential of these technologies. Similarly, in music processing, deep learning has led to advancements in countless applications, including source separation, signal enhancement, data compression, and music composition. The rapid evolution of these fields highlights how deeply integrated machine learning has become in reshaping the landscape of audio research. While deep learning models can perform exceptionally well when massive amounts of training examples are available, they often struggle in specialized scenarios where data are scarce or expensive to obtain. Furthermore, their performance is highly dependent on the quality of the data they are trained on, and they can sometimes fail to generalize to unseen domains or edge cases.
At the same time, the inherent "black-box" nature of many machine learning algorithms makes it challenging to interpret their decision-making processes, raising concerns regarding their reliability and trustworthiness in safety-critical applications. For these reasons, many experts to this day caution against the tendency to abstract well-understood audio-related tasks behind overengineered deep learning systems, thereby neglecting over a century of research in acoustics and signal processing whose findings may instead continue to offer valuable insight, even in the era of Big Data. In this thesis, we focus on three key areas of data-driven audio signal processing: everyday sound classification, music signal processing, and audio system modeling. The original contributions included in this manuscript integrate machine learning with principled modeling approaches, introducing innovative methods that, at the time of publication, advanced the state of the art in the respective fields.
File | Size | Format
---|---|---
Alessandro_Ilic_Mezza_PhD_dissertation_09_04_2025.pdf (access only from BNCF and BNCR) | 37.06 MB | Adobe PDF
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/220087
URN:NBN:IT:POLIMI-220087