
Measurement Uncertainty in Artificial Intelligence

LAINO, VALTER
2026

Abstract

The rapid proliferation of artificial intelligence (AI) systems for classification and prediction has transformed both scientific research and industrial applications. While these technologies have demonstrated remarkable performance, their deployment in safety-critical contexts raises fundamental concerns regarding the reliability, robustness, and trustworthiness of their outputs. In particular, inaccurate estimations may result in high-risk or even life-threatening consequences, underscoring the need for rigorous methods to quantify and assess the uncertainty associated with AI-driven estimates. This project addresses these challenges by introducing a novel methodology grounded in the law of propagation of uncertainty (LPU). The approach is designed to provide fast and simultaneous inference of measurement uncertainty in artificial neural networks (ANNs), thereby enabling systematic evaluation of the effects of uncertainty propagation from input data to final outputs. Unlike conventional black-box treatments of neural networks, the proposed methodology explicitly analyzes their underlying mathematical structures and accounts for the nonlinearities that complicate direct application of standard uncertainty analysis. By incorporating the LPU into the inference process, this work enables the propagation of input measurement uncertainty through the network architecture, yielding an analytical estimate of output uncertainty and providing a rigorous framework aligned with the ISO Guide to the Expression of Uncertainty in Measurement (ISO-GUM). The ultimate goal is to enhance the interpretability, transparency, and reliability of ANN-based systems, particularly in domains where decision-making depends critically on the validity of predictions.
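To make the idea concrete, the first-order LPU applied to a network y = f(x) propagates an input covariance matrix Σ_x to an output covariance Σ_y = J Σ_x Jᵀ, where J is the Jacobian of the network at the measured input. The sketch below illustrates this on a hypothetical one-hidden-layer tanh network with random weights; it is an assumption-laden toy example, not the thesis's actual architecture or implementation.

```python
import numpy as np

# Toy fully connected network (illustrative only): y = W2 @ tanh(W1 @ x + b1) + b2
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2)); b1 = rng.normal(size=3)
W2 = rng.normal(size=(1, 3)); b2 = rng.normal(size=1)

def forward(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

def jacobian(x):
    # Chain rule through the hidden layer: d tanh(z)/dz = 1 - tanh(z)^2,
    # applied elementwise, so it appears as a diagonal matrix.
    z = W1 @ x + b1
    D = np.diag(1.0 - np.tanh(z) ** 2)
    return W2 @ D @ W1  # shape (1, 2): d y / d x

x = np.array([0.5, -1.0])          # measured input
Sigma_x = np.diag([0.01, 0.04])    # input measurement variances (assumed)

J = jacobian(x)
Sigma_y = J @ Sigma_x @ J.T        # LPU: first-order propagated covariance
u_y = np.sqrt(np.diag(Sigma_y))    # standard uncertainty of the output
```

Because the propagation is a closed-form matrix product, it runs alongside a single forward pass, which is the "fast and simultaneous inference" property the abstract refers to; the first-order truncation is exact only where the nonlinearities are locally well approximated by their tangent.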
20-feb-2026
English
Carratù, Marco
Università degli Studi di Salerno
Files in this record:

ABSTRACT.pdf — open access — License: all rights reserved — 52.77 kB, Adobe PDF
tesi.pdf — open access — License: all rights reserved — 3.98 MB, Adobe PDF

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14242/357626
The NBN code of this thesis is URN:NBN:IT:UNISA-357626