Data Preparation for AI: From Efficient Pipelines to Agentic Systems
MOZZILLO, ANGELO
2026
Abstract
Data preparation, which includes the discovery, integration, transformation, cleaning, and enrichment of data, remains one of the most difficult and time-consuming activities for data scientists. Its iterative and context-dependent nature requires domain expertise and significant human and computational resources. In recent years, research and industry have focused on improving individual stages, optimizing specific operations, and introducing tools and frameworks that enhance efficiency and sustainability, promoting the reuse of processes and operations to reduce time and costs. Yet in practice, data preparation remains a manual and fragmented process. Experts continue to rely on ad-hoc pipelines that are rarely reviewed or optimized. This is also due to the lack of tools that support users in choosing among alternatives and in designing adaptive pipelines capable of balancing quality, performance, and cost according to the dataset and the task. At the same time, solutions based on Artificial Intelligence and, in particular, Large Language Models (LLMs) have shown promising results in improving various stages of this process. These models offer new opportunities for democratizing this activity, allowing even non-expert users to interact in natural language and to automate the most complex stages of the process. However, current solutions remain difficult to generalize, are expensive, and suffer from transparency and verifiability issues that limit their large-scale adoption. This thesis addresses these challenges by proposing an integrated approach based on three complementary dimensions: efficiency, sustainability, and automation. Our first contribution is an extensive evaluation of existing open-source libraries for building data preparation pipelines.

For this purpose, we developed Bento, a framework for defining and executing preparation pipelines compatible with the main Python libraries, offering a systematic comparison of their performance, scalability, and resource consumption. The goal is to provide practical guidelines that help data scientists choose the most suitable library based on the dataset to be prepared, the available hardware, and the operations to be performed. Next, we propose a unified evaluation model to analyze and improve the sustainability of data preparation by integrating the principles of the circular economy. Through a comparative analysis, this approach shows how data quality can be balanced against environmental, economic, and social costs. Finally, the thesis extends the traditional view of data preparation, highlighting that discovering and building new datasets is crucial for more complete and reliable analyses, since in real-world scenarios the available data is rarely sufficient. It then explores the role of LLMs as autonomous agents that can assist with and automate these operations. We developed LakeGen, an auditable multi-agent system that lets users interact with open data in natural language and translates their requests into executable pipelines, autonomously integrating tables from heterogeneous sources. To evaluate its effectiveness, we introduced OrQA, a complementary framework that automatically generates benchmark datasets composed of verified question, SQL query, and answer triplets, enabling a structured analysis of performance.

| File | Size | Format |
|---|---|---|
| Mozzillo.pdf (under embargo until 29/09/2027; license: all rights reserved) | 4.45 MB | Adobe PDF |
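The abstract does not detail how OrQA represents or checks its benchmark triplets. As a purely illustrative sketch (the `QATriplet` class, the `verify` function, and the sample data below are invented for this example and are not OrQA's actual API), a question, SQL query, and answer triplet can be considered "verified" when executing its query against the source tables reproduces the recorded answer:

```python
import sqlite3
from dataclasses import dataclass

# Hypothetical triplet shape: a natural-language question, the SQL query
# that answers it, and the expected result rows. Illustrative only.
@dataclass
class QATriplet:
    question: str
    sql: str
    answer: list

def verify(conn: sqlite3.Connection, t: QATriplet) -> bool:
    """A triplet is 'verified' if running its SQL reproduces its answer."""
    rows = conn.execute(t.sql).fetchall()
    return rows == t.answer

# Toy source table standing in for a data-lake table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city (name TEXT, population INTEGER)")
conn.executemany("INSERT INTO city VALUES (?, ?)",
                 [("Modena", 184727), ("Bologna", 387971)])

t = QATriplet(
    question="Which city has the largest population?",
    sql="SELECT name FROM city ORDER BY population DESC LIMIT 1",
    answer=[("Bologna",)],
)
print(verify(conn, t))  # prints True
```

Execution-based verification of this kind is what makes such a benchmark auditable: a triplet whose answer cannot be reproduced from the data is rejected rather than trusted.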
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/362895
URN:NBN:IT:UNIMORE-362895