Gaussian Covariance and Scalable Variational Inference

We analyze computational aspects of variational approximate inference techniques for sparse linear models, which must be understood to enable large-scale applications. Gaussian covariances, whose approximation is computationally hard, play a key role. While most previous methods gain scalability by not representing most posterior dependencies at all, such harmful factorization assumptions can be avoided by employing data-dependent low-rank approximations instead. We provide theoretical and empirical insights into the algorithmic and statistical consequences of low-rank covariance approximation errors on decision outcomes in nonlinear sequential Bayesian experimental design.
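The contrast drawn in the abstract, between factorized approximations that discard posterior dependencies and data-dependent low-rank covariance approximations, can be illustrated on a toy Bayesian linear model. The following sketch is not the paper's algorithm; all names and problem sizes are illustrative assumptions. It compares a diagonal approximation of an exact Gaussian posterior covariance against its optimal rank-k truncation.

```python
# Hedged sketch (not the paper's method): factorized vs. data-dependent
# low-rank approximation of a Gaussian posterior covariance.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 20, 5                       # observations, weights, target rank
sigma2, tau = 0.1, 1.0                    # noise variance, prior precision

X = rng.standard_normal((n, d))
A = X.T @ X / sigma2 + tau * np.eye(d)    # posterior precision matrix
cov = np.linalg.inv(A)                    # exact posterior covariance

# Factorized approximation: keep marginal variances, drop all dependencies.
diag_approx = np.diag(np.diag(cov))

# Data-dependent low-rank approximation: keep the k leading eigendirections
# of the covariance (optimal rank-k truncation in Frobenius norm).
eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
U, s = eigvecs[:, -k:], eigvals[-k:]
low_rank = (U * s) @ U.T

err_diag = np.linalg.norm(cov - diag_approx)
err_low = np.linalg.norm(cov - low_rank)
print(f"Frobenius error, diagonal: {err_diag:.4f}, rank-{k}: {err_low:.4f}")
```

At realistic scale one would not form `cov` explicitly; low-rank factors would instead be obtained iteratively, e.g. by Lanczos-type methods applied to the precision matrix.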


Editor(s):
Fuernkranz, J.
Joachims, T.
Published in:
Proceedings of the 27th International Conference on Machine Learning
Presented at:
International Conference on Machine Learning 27, Haifa, Israel
Year:
2010
Publisher:
Omnipress
