Scalable Collaborative Bayesian Preference Learning

Learning about users’ utilities from preference, discrete choice or implicit feedback data is of integral importance in e-commerce, targeted advertising and web search. Due to the sparsity and diffuse nature of the data, Bayesian approaches hold much promise, yet most prior work does not scale to realistic data sizes. We shed light on why inference for such settings is computationally difficult for standard machine learning methods, most of which focus on predicting explicit ratings only. To circumvent this difficulty, we present a novel expectation maximization algorithm, driven by expectation propagation approximate inference, which scales to very large datasets without requiring strong factorization assumptions. Our utility model combines latent bilinear collaborative filtering with non-parametric Gaussian process (GP) regression. In experiments on large real-world datasets, our method gives substantially better results than either matrix factorization or GPs in isolation, and converges significantly faster.
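As a rough illustration of the bilinear collaborative-filtering part of such a utility model (a minimal sketch, not the paper's actual algorithm or code), one can model each user's utility for an item as an inner product of latent trait vectors, and score a pairwise preference with a probit likelihood. All names and dimensions below are hypothetical:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Hypothetical sizes: n_users users, n_items items, latent rank k.
n_users, n_items, k = 5, 8, 3

# Bilinear utility: utility[u, i] = <U[u], V[i]>, with latent
# user traits U and item traits V (here drawn at random for illustration;
# in a real system they would be learned from preference data).
U = rng.normal(size=(n_users, k))
V = rng.normal(size=(n_items, k))
utility = U @ V.T  # shape (n_users, n_items)

def pref_prob(u, i, j, noise=1.0):
    """Probability that user u prefers item i over item j under a
    probit (Thurstone) pairwise likelihood: each observed utility is
    corrupted by Gaussian noise with std `noise`, so the difference
    has std sqrt(2)*noise and P(i > j) = Phi((u_i - u_j) / (sqrt(2)*noise))."""
    z = (utility[u, i] - utility[u, j]) / (sqrt(2.0) * noise)
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))
```

The probit form is what makes exact Bayesian inference over the latent traits intractable and motivates approximate schemes such as expectation propagation; the GP regression component of the paper's model is omitted here.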


Editor(s):
Kaski, Samuel
Corander, Jukka
Published in:
Proceedings of the 17th International Conference on Artificial Intelligence and Statistics, 33, 475-483
Presented at:
17th International Conference on Artificial Intelligence and Statistics, Reykjavik, Iceland, April 22-25, 2014
Year:
2014
Record created 2014-02-12, last modified 2018-03-17
