In recent years, large language models (LLMs) have emerged as powerful tools for natural and programming language processing. LLMs are trained on vast amounts of text and code data, enabling them to understand and generate human-like text with high accuracy. However, despite their capabilities, the outputs generated by LLMs are not always reliable or accurate, especially in the context of programming and formal languages, where precision is crucial. To address this challenge, some approaches have proposed integrating validation steps into LLM-based pipelines. Validating LLM outputs involves procedures that ensure compliance with task-specific requirements. In Madaan et al., for instance, the LLM-based system verifies the generated output through a further LLM call and, in case of errors, feeds the diagnostics back to the model to refine its output iteratively. Other approaches rely on different validation mechanisms, based on static analysis, compiler diagnostics, or unit tests. This thesis explores validation-driven frameworks and methodologies that employ LLMs for programming language generation and translation, as well as their deployment in educational contexts for automated code evaluation. These validation-centric workflows frequently integrate iterative feedback mechanisms, in which the outcomes of the validation process guide subsequent iterations of generation or translation, thereby improving the accuracy and dependability of LLM outputs. In the first part, we present the use of validation-driven workflows for the generation and translation of semi-structured data and programming languages.
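The validation-feedback mechanism described above can be sketched as a generic refinement loop. This is an illustrative sketch only, not the thesis implementation: all names (`refine_with_validation`, the toy generator and validator) are hypothetical, and the actual pipelines plug in external checkers such as the official FHIR validator, static analysers, or unit tests as the validation step.

```python
import json


def refine_with_validation(generate, validate, prompt, max_iters=3):
    """Generate output, validate it, and feed diagnostics back to the
    generator until validation passes or the iteration budget runs out."""
    feedback = None
    output = None
    for _ in range(max_iters):
        output = generate(prompt, feedback)  # LLM call (stubbed below)
        errors = validate(output)            # external checker
        if not errors:
            return output, True              # validated result
        feedback = "\n".join(errors)         # diagnostics guide the next attempt
    return output, False                     # best effort after the budget


# Toy stand-ins: the "model" emits invalid JSON until it receives feedback,
# and the "validator" is simply the JSON parser.
def toy_generate(prompt, feedback):
    return '{"ok": true}' if feedback else '{ok: true}'


def toy_validate(text):
    try:
        json.loads(text)
        return []
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e.msg}"]


result, ok = refine_with_validation(toy_generate, toy_validate, "emit JSON")
```

Here the first attempt fails validation, the parser's diagnostic is returned as feedback, and the second attempt succeeds; the same shape accommodates any generator/validator pair.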
We present a validation-driven pipeline for the generation of JSON-FHIR resources from clinical narratives, which incorporates a validation-feedback loop, based on the official FHIR validator, to iteratively refine the generated resources. Then, we propose a validation-aware workflow for translating code between procedural and functional paradigms in JavaScript, integrating static analysis and feedback within the prompting strategy to enhance semantic preservation during translation. We also analyse a validation-driven workflow for synthesizing Fluid programs, demonstrating that verification-aware prompting strategies can enhance the correctness of generated programs. Fluid is a programming language designed for data visualisation tasks, with a particular focus on transparency. In the second part of this dissertation, we explore the application of LLMs in educational contexts, specifically for automated code grading and assessment. We propose a framework that leverages LLMs to automatically evaluate student submissions in programming assignments, empirically analysing different LLMs. We then extend the workflow and the information used in the grading phase, providing not only the code but also the expected output, the test results, the compiler feedback, and static analysis results. Results show that LLM-generated grades differ from human grades by about 1.15 points out of 10, and that including additional information during the grading phase improves the performance of LLMs in code assessment tasks. Finally, we discuss the impact of LLMs in educational contexts, highlighting the overtrust phenomenon among students, who tend to over-rely on LLM outputs instead of using them as support tools.
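The extended grading workflow above enriches the LLM's context beyond the bare code. A minimal sketch of how such a grading prompt might be assembled follows; the structure and all field names (`Submission`, `build_grading_prompt`) are illustrative assumptions, not the thesis's actual prompt format.

```python
from dataclasses import dataclass


@dataclass
class Submission:
    """Hypothetical container for a student submission plus validation artifacts."""
    code: str
    expected_output: str
    test_results: str
    compiler_feedback: str
    static_analysis: str


def build_grading_prompt(s: Submission) -> str:
    """Combine the code with the extra grading-phase information into one prompt."""
    return (
        "Grade the following student submission on a 0-10 scale.\n\n"
        f"## Code\n{s.code}\n\n"
        f"## Expected output\n{s.expected_output}\n\n"
        f"## Test results\n{s.test_results}\n\n"
        f"## Compiler feedback\n{s.compiler_feedback}\n\n"
        f"## Static analysis\n{s.static_analysis}\n"
    )


prompt = build_grading_prompt(Submission(
    code="print('Hello')",
    expected_output="Hello",
    test_results="1/1 tests passed",
    compiler_feedback="no errors",
    static_analysis="no issues reported",
))
```

The design point is simply that each validation artifact occupies its own labelled section, so the model can weigh test outcomes and diagnostics alongside the source code.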
Moreover, we analyse the environmental impact of code models, assessing the carbon footprint reported in the literature. The results show that several studies do not attempt to estimate their impact and generally report only training information; from this information alone it is difficult to estimate the impact precisely, although we provide approximate estimates. Future work includes extending the analysis of LLMs to additional programming languages, formal languages, and paradigms, investigating non-LLM AI models for language-specific tasks, and developing rigorous, explainable techniques for automated code grading and assessment.
In recent years, large language models (LLMs) have shown remarkable capabilities in processing natural and programming languages. Trained on large amounts of data and code, these models can understand and generate text with high accuracy. However, despite these capabilities, the output produced by LLMs is not always reliable, particularly in the context of programming and formal languages, where precision is a fundamental requirement. To address these limitations, the recent literature proposes integrating validation processes into LLM-based pipelines, in order to verify compliance with specific syntactic and semantic requirements. Validation-driven workflows fit into this context, exploiting iterative feedback mechanisms to progressively improve the quality of the generated output. This thesis analyses validation-based frameworks and methodologies that employ LLMs for the generation and translation of programming languages, as well as their application in educational settings for automated code assessment. The proposed workflows integrate validation procedures that guide subsequent generation and translation iterations, increasing the accuracy and reliability of the results. The first part of the thesis studies the use of validation-driven workflows for the generation and translation of semi-structured data and programming languages. In particular, it presents a pipeline for generating JSON-FHIR resources from clinical narratives, based on a feedback loop that uses the official FHIR validator. It also analyses a workflow for translating code between procedural and functional paradigms in JavaScript, which integrates static analysis and interpreter feedback to improve semantic preservation.
Finally, it considers a workflow for synthesizing programs in Fluid, a language oriented towards data visualisation, showing how verification-aware prompting strategies can improve the correctness of the generated programs. The second part of the thesis explores the use of LLMs in educational contexts for automated code assessment. It proposes a framework for evaluating student submissions in programming exercises, subsequently extended to include additional information such as the expected output, test results, compiler feedback, and static analysis. The experimental results show that the grades generated by LLMs differ from human grades by about 1.15 points out of 10, and that the use of additional information improves model performance. Finally, it discusses the impact of LLMs in education, with particular attention to the overtrust phenomenon among students, and analyses the environmental impact of code models, highlighting the difficulty of estimating the carbon footprint from the information currently available in the literature.
Validation-Driven LLM Architectures for Code Generation, Translation, and Automated Grading
Piscitelli, Alfonso
2026
| File | Size | Format |
|---|---|---|
| phd_thesis__1_.pdf (under embargo until 23/02/2027; licence: all rights reserved) | 3.74 MB | Adobe PDF |
| abstract-tesi.pdf (under embargo until 23/02/2027; licence: all rights reserved) | 60.53 kB | Adobe PDF |
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/358174
URN:NBN:IT:UNISA-358174