Convex Variational Bayesian Inference for Large Scale Generalized Linear Models

We show how variational Bayesian inference can be implemented for very large binary classification generalized linear models. Our relaxation is shown to be a convex problem for any log-concave model, and we provide an efficient double loop algorithm for solving it. Scalability is attained by decoupling the criterion, so that most of the work can be done by solving large linear systems. We employ our method for Bayesian active learning on large binary classification tasks and provide an algorithm to efficiently update our posterior representation when new observations are sequentially included.
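The abstract's key computational idea is that Gaussian variational inference for a log-concave GLM can be driven almost entirely by solving symmetric linear systems. The sketch below is not the paper's double loop algorithm; it is the classic Jaakkola–Jordan quadratic bound for Bayesian logistic regression, which shares that property: each refit of the variational Gaussian posterior reduces to one linear solve. All names and the toy setup are illustrative assumptions.

```python
import numpy as np

def jj_variational_logistic(X, y, prior_var=1.0, n_iter=30):
    """Variational Gaussian posterior q(w) = N(m, S) for Bayesian logistic
    regression via the Jaakkola-Jordan quadratic bound (an illustrative
    stand-in for the paper's method). Labels y are in {0, 1}.

    Each iteration:
      1. tightens the per-observation bound parameters xi_n,
      2. refits (m, S) by solving one symmetric linear system,
    mirroring the 'most of the work is large linear solves' structure."""
    n, d = X.shape
    A = np.eye(d) / prior_var              # prior precision
    xi = np.ones(n)                        # variational bound parameters
    m, S = np.zeros(d), np.eye(d)
    for _ in range(n_iter):
        lam = np.tanh(xi / 2.0) / (4.0 * xi)       # bound curvature terms
        P = A + 2.0 * X.T @ (lam[:, None] * X)     # posterior precision
        m = np.linalg.solve(P, X.T @ (y - 0.5))    # one linear solve
        S = np.linalg.inv(P)                       # small-d only; large-scale
                                                   # codes avoid forming S
        # xi_n^2 = E[(x_n^T w)^2] under q(w), tightening the bound.
        xi = np.sqrt(np.einsum('nd,dk,nk->n', X, S + np.outer(m, m), X))
    return m, S
```

For the sequential/active-learning setting the abstract mentions, one would update the posterior representation incrementally when a new observation arrives (e.g. a rank-one update of the precision factor) rather than refitting from scratch; the loop above simply recomputes everything for clarity.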


Published in:
Proceedings of the 26th International Conference on Machine Learning, 761-768
Presented at:
26th International Conference on Machine Learning, Montreal, Canada
Year:
2009
Record created 2010-12-01, last modified 2018-03-17
