A Family of Variational Algorithms for Approximate Bayesian Inference of High-Dimensional Data
OLOBATUYI, KEHINDE IBUKUN
2021
Abstract
The Bayesian framework for machine learning allows prior knowledge to be incorporated into a model in a coherent manner: it mitigates overfitting by seeking to approximate the exact posterior, and it provides a principled basis for selecting among alternative models. Unfortunately, the computations required by the Bayesian framework are usually intractable. This thesis develops a family of Variational Bayesian (VB) methods that approximate these intractable computations by introducing latent variables and minimizing the Kullback-Leibler divergence between the exact posterior and an approximating distribution. Chapter 1 presents background material on Bayesian inference and propagation algorithms. Chapter 2 discusses variational Bayesian theory, which generalizes the expectation maximization (EM) algorithm for learning maximum likelihood parameters, and concludes with the factorized approximations used in Expectation Propagation (EP). Chapters 3-5 derive and apply variants of Variational Bayes to the family of cluster weighted models (CWMs); they review the history of CWMs and propose new members of the family. First, dimensionality reduction for CWMs is explored by introducing t-distributed stochastic neighbor embedding (tSNE), leading to tSNE-based CWMs for high-dimensional data. Next, a Multinomial CWM is proposed for multiclass classification, along with a Zero-inflated Poisson CWM for zero-inflated count data. The thesis derives and applies the EM algorithm with three different maximization-step algorithms for estimating the models' parameters: Ordinary Least Squares (OLS), Iteratively Reweighted Least Squares (IRLS), and Stochastic Gradient Descent (SGD). The classification performance of the CWM family is then assessed using eight different information criteria and several variants of the Adjusted Rand Index (ARI). Chapter 6 proposes two variants of Expectation Propagation for inverse models, EP-MCMC and EP-ADMM; both are demonstrated on complex Bayesian models for image reconstruction and their performance is compared to MCMC. Chapter 7 concludes with a discussion and possible future directions for optimization algorithms.
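As a point of reference, the standard identity underpinning the variational approach described above can be stated as follows; the notation here is illustrative and not quoted from the thesis. For observed data x, latent variables z, and any approximating distribution q(z),

\[
\log p(x) \;=\; \underbrace{\mathbb{E}_{q(z)}\!\left[\log \frac{p(x,z)}{q(z)}\right]}_{\text{evidence lower bound } \mathcal{L}(q)} \;+\; \mathrm{KL}\big(\, q(z) \,\|\, p(z \mid x) \,\big),
\]

where the first term is the evidence lower bound (ELBO). Since log p(x) does not depend on q, maximizing the ELBO over a tractable family of distributions is equivalent to minimizing the Kullback-Leibler divergence between the approximation and the exact posterior, which is the optimization the abstract refers to.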
File: phd_unimib_827344.pdf (open access, 22.91 MB, Adobe PDF)
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14242/171196
URN:NBN:IT:UNIMIB-171196