Cover Tree Bayesian Reinforcement Learning

This paper proposes an online tree-based Bayesian approach for reinforcement learning. For inference, we employ a generalised context tree model. This defines a distribution on multivariate Gaussian piecewise-linear models, which can be updated in closed form. The tree structure itself is constructed using the cover tree method, which remains efficient in high-dimensional spaces. We combine the model with Thompson sampling and approximate dynamic programming to obtain effective exploration policies in unknown environments. The flexibility and computational simplicity of the model render it suitable for many reinforcement learning problems in continuous state spaces. We demonstrate this in an experimental comparison with least squares policy iteration.
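The abstract's core ingredients are a tree-structured partition of the state space (built with a cover tree) and, at each region, a Bayesian linear-Gaussian model whose posterior admits a closed-form update; Thompson sampling then draws one model from the posterior to drive exploration. The following Python sketch illustrates only the second ingredient, in its simplest scalar-output form. The class and parameter names (BayesianLinearModel, alpha, sigma) are illustrative assumptions, not the paper's code; the paper's actual model is multivariate and maintains one such local model per tree node.

import numpy as np

class BayesianLinearModel:
    """Bayesian linear regression with known noise variance sigma^2 and an
    isotropic Gaussian prior w ~ N(0, (1/alpha) I). Hyperparameter names
    here are illustrative, not taken from the paper."""

    def __init__(self, dim, alpha=1.0, sigma=0.1):
        self.sigma2 = sigma ** 2
        self.precision = alpha * np.eye(dim)  # posterior precision, starts at prior
        self.xty = np.zeros(dim)              # accumulates (1/sigma^2) * X^T y

    def update(self, x, y):
        # Closed-form conjugate update: a rank-one change to the precision.
        self.precision += np.outer(x, x) / self.sigma2
        self.xty += x * y / self.sigma2

    def posterior(self):
        cov = np.linalg.inv(self.precision)
        return cov @ self.xty, cov            # posterior mean and covariance

    def thompson_sample(self, rng):
        # Thompson sampling: act according to one draw from the posterior.
        mean, cov = self.posterior()
        return rng.multivariate_normal(mean, cov)

# Usage: recover y = 2*x0 - x1 from noisy observations, then sample a model.
rng = np.random.default_rng(0)
model = BayesianLinearModel(dim=2)
w_true = np.array([2.0, -1.0])
for _ in range(200):
    x = rng.normal(size=2)
    model.update(x, x @ w_true + 0.1 * rng.normal())
print(model.thompson_sample(rng))  # a sample close to [2., -1.]

Because the Gaussian likelihood is conjugate to the Gaussian prior, each observation costs only a rank-one update to the precision matrix, which is what makes closed-form posterior maintenance (and hence Thompson sampling) cheap enough for online reinforcement learning.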


Presented at:
International Joint Conference on Artificial Intelligence, IJCAI 2013
Year:
2013
Publisher:
arXiv




