OPENING THE BLACK BOX: NOWCASTING SINGAPORE’S GDP GROWTH AND ITS EXPLAINABILITY
ATTOLICO, LUCA
2025
Abstract
Effective economic policymaking requires timely assessments of current economic conditions rather than relying solely on past-quarter data, especially in a dynamic, open economy like Singapore's. Promptly identifying turning points is critical, since international fluctuations can swiftly influence domestic conditions. This real-time assessment, known as "nowcasting," is particularly challenging in rapidly changing economic environments. Econometric approaches have dominated GDP growth nowcasting for years. In particular, applications of Dynamic Factor Models (DFMs) in macroeconomic studies have spurred substantial research on reducing high-dimensional datasets to a small set of latent factors that evolve over time. Nonetheless, these factors may introduce noise when distilling key signals from a vast pool of predictors, and they do not consistently deliver the strongest forecasting performance, which ultimately limits model interpretability. Traditional forecasting frameworks also struggle with structural breaks and crisis periods, underscoring the need for flexible methods suited to volatile economic settings. Machine Learning (ML) models offer a promising path toward more accurate GDP growth nowcasting: because they can approximate intricate nonlinear relationships, they frequently surpass older econometric techniques. In addition, recent advances in explainable artificial intelligence (XAI) give policymakers and researchers a clearer view of the internal mechanics of these models, enabling more targeted policy decisions. By integrating sub-period analyses, researchers can capture the heterogeneity of economic shocks, enhance real-time surveillance, and refine policy responses for small, highly exposed economies.
This research investigates how ML algorithms can improve the accuracy of nowcasting Singapore's GDP growth while clarifying the reasoning behind the forecasts. Specifically, we introduce a structured pipeline designed to mitigate look-ahead bias, estimate model reliability via block bootstrap intervals, and highlight the key features driving predictions. We also incorporate model combination techniques, such as simple, weighted, and exponentially weighted averaging, to boost forecast stability, and we monitor the dynamic weights assigned to individual models, offering an interpretive lens on which learners dominate at different points in time. These strategies are complemented by sub-period breakdowns that evaluate performance under varying market phases, giving a broader perspective on robustness. We analyze approximately seventy variables, encompassing economic fundamentals (such as prices, production, trade, and investment), demographic factors (like population growth and age distribution), and societal indicators (e.g., employment rates). Data from the Singapore Department of Statistics and additional sources, including the Bank of Italy for external reference indices, span 1990 Q1 to 2023 Q2. We implement penalized linear models (Lasso, Ridge, Elastic Net), dimensionality reduction approaches (Principal Component Regression, Partial Least Squares), ensemble learning methods (Random Forest, XGBoost), and neural architectures (Multilayer Perceptron, Gated Recurrent Unit). Benchmarks such as a naïve Random Walk, an AR(3), and a Dynamic Factor Model are also included for comparison. Hyperparameters are selected via Bayesian optimization within an expanding-window nested cross-validation framework that respects the temporal dependencies of macroeconomic time series. Our findings support the stability and reliability of ML-based models in economic forecasting.
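The expanding-window evaluation described above can be illustrated with a minimal sketch (not the thesis code): each nowcast is produced by a model trained only on quarters preceding the target quarter, which is what mitigates look-ahead bias. The synthetic data, variable names, and the choice of a Ridge learner here are assumptions for illustration only.

```python
# Minimal expanding-window nowcasting loop (illustrative sketch).
# At each step t, the model is fit on quarters [0, t) and predicts
# quarter t, so no future information leaks into training.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_quarters, n_predictors = 134, 70      # 1990 Q1 - 2023 Q2 ~ 134 quarters, ~70 variables
X = rng.normal(size=(n_quarters, n_predictors))
y = 0.5 * X[:, 0] + rng.normal(size=n_quarters)   # synthetic GDP growth proxy

errors = []
for t in range(80, n_quarters):         # hold out an initial training window
    model = Ridge(alpha=1.0)
    model.fit(X[:t], y[:t])             # train on data available up to quarter t
    pred = model.predict(X[t:t + 1])    # one-quarter-ahead nowcast
    errors.append(y[t] - pred[0])

rmse = float(np.sqrt(np.mean(np.square(errors))))
```

In the full pipeline this outer loop would wrap an inner cross-validation for Bayesian hyperparameter search; the sketch keeps only the outer expanding window to show the temporal-ordering discipline.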
These models consistently outperform benchmark forecasts, including official projections by major financial institutions, and demonstrate resilience across crisis episodes. On average, penalized linear models, dimensionality reduction approaches, and neural networks reduce nowcast errors by roughly 40% to 60% relative to established benchmarks such as the Random Walk, AR(3), and Dynamic Factor Models. Combining these strong learners yields further reductions in nowcast errors across performance metrics. Beyond GDP growth nowcasting, we employ explainability approaches appropriate to each model family, such as coefficient evolution, Variable Importance in the Projection, Gini-based and permutation-based measures, and Integrated Gradients, to enhance interpretability. By pairing these techniques with block bootstrap procedures, we pinpoint which features exert the greatest impact on GDP variations and gauge the uncertainty surrounding them, illustrating, for instance, how changes in foreign trade components or prices can trigger significant shifts in growth projections. Although XAI techniques are gaining traction in fields like image recognition and consumer analytics, they remain relatively unexplored in macroeconomic contexts. This approach gives policymakers a deeper understanding of how shocks propagate through the economy. Our study contributes to the broader discourse on ML-based macroeconomic forecasting by showing how sub-period analysis, uncertainty quantification, model combination, and XAI can operate in tandem. Through this integration, we achieve stronger predictive accuracy and foster greater confidence and clarity in the outputs, qualities of particular value for small, internationally interconnected economies. Future work could refine the methodology by incorporating high-frequency data and emerging ML architectures, making the framework still more versatile and robust.
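One common way to implement the weighted model combination mentioned above is to set each model's weight proportional to the inverse of its recent mean squared error, so that better-performing learners dominate over time. The sketch below illustrates this idea under that assumption; the function name and the toy error values are hypothetical, not taken from the thesis.

```python
# Inverse-MSE forecast combination (illustrative sketch).
# Models with smaller recent errors receive larger weights;
# recomputing the weights each period makes them dynamic.
import numpy as np

def inverse_mse_weights(past_errors):
    """past_errors: (n_models, n_periods) array of recent forecast errors."""
    mse = np.mean(np.square(past_errors), axis=1)
    inv = 1.0 / mse
    return inv / inv.sum()              # weights sum to one

errs = np.array([[0.5, -0.4, 0.6],      # model A: larger recent errors
                 [0.1, 0.2, -0.1]])     # model B: smaller recent errors
w = inverse_mse_weights(errs)
forecasts = np.array([2.0, 1.0])        # current-period nowcasts from A and B
combined = float(w @ forecasts)         # combination leans toward model B
```

Tracking `w` over successive quarters is what provides the interpretive lens noted earlier: the weight path reveals which learners dominate in calm versus crisis periods.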
https://hdl.handle.net/20.500.14242/302553
URN:NBN:IT:UNIMC-302553