Advancements on variational inference for quantile regression and Gaussian processes
CARLESI, PIERGIACOMO ANDREA
2026
Abstract
Bayesian models provide a powerful and flexible framework for learning from data, but their use is often limited by the intractability of exact posterior inference. Traditional approaches such as Markov chain Monte Carlo deliver accurate, uncertainty-aware estimates, but they are computationally intensive and often impractical for large datasets or repeated analyses. Variational inference offers an alternative: it turns inference into optimization, replacing sampling with the search for the best approximation within a tractable family of distributions. While this comes at the cost of some fidelity in uncertainty quantification, it enables scalable algorithms that make rich probabilistic models feasible in practice. In this thesis, we apply these methods to regression with complex models, such as dynamic quantile regression and Gaussian processes, to derive new variational approximations. Quantile models are an important resource in fields such as economics and finance, where correctly identifying and modelling risk is essential. Financial systems are particularly susceptible to systemic risk, making models that can correctly capture tail dependence highly valuable. We derive both univariate quantile models, with latent dynamics and variable selection, and multivariate quantile models that jointly capture the quantile dynamics and the quantile dependence across series. In many modern settings, the modeller is more interested in minimizing a measure of fit than in fitting a generative model to the data. Generalized variational inference defines a valid inference procedure by combining an expected loss function with a divergence measure between parameter distributions. We apply this recent framework to Gaussian processes, a class of models prized for flexibility and interpretability but lacking robustness, to derive posteriors that are more robust to input outliers, output outliers, and prior misspecification in general.
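The quantile models mentioned in the abstract are built on the check (pinball) loss, whose minimizer is the τ-th quantile of the data. A minimal numerical sketch of this fact (illustrative only, not code from the thesis; the grid-search fit is a deliberately simple stand-in for the variational procedures developed in the work):

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Check (pinball) loss: asymmetric absolute error.
    Under-predictions are weighted by tau, over-predictions by (1 - tau)."""
    u = y - q_pred
    return np.mean(np.maximum(tau * u, (tau - 1.0) * u))

def fit_quantile(y, tau):
    """Fit a constant tau-quantile by minimizing the pinball loss on a grid."""
    grid = np.linspace(y.min(), y.max(), 1001)
    losses = [pinball_loss(y, q, tau) for q in grid]
    return grid[int(np.argmin(losses))]

rng = np.random.default_rng(0)
y = rng.normal(size=10_000)
q90 = fit_quantile(y, 0.9)  # approximately the standard normal 0.9-quantile (~1.28)
```

Minimizing the expected pinball loss instead of a negative log-likelihood is exactly the kind of loss-based objective that generalized variational inference accommodates.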
| File | Access | License | Size | Format |
|---|---|---|---|---|
| Tesi_Carlesi_PiergiacomoAndrea.pdf | Open access | All rights reserved | 1.35 MB | Adobe PDF |
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/357156
URN:NBN:IT:UNIPD-357156