Infoscience (EPFL, École polytechnique fédérale de Lausanne)
Conference paper

Linear Bayesian Reinforcement Learning

Tziortziotis, Nikolaos • Dimitrakakis, Christos • Blekas, Konstantinos
2013
IJCAI '13: Proceedings of the Twenty-Third international joint conference on Artificial Intelligence
23rd International Joint Conference on Artificial Intelligence (IJCAI 2013)

This paper proposes a simple linear Bayesian approach to reinforcement learning. We show that with an appropriate basis, a Bayesian linear Gaussian model is sufficient for accurately estimating the system dynamics, and in particular when we allow for correlated noise. Policies are estimated by first sampling a transition model from the current posterior, and then performing approximate dynamic programming on the sampled model. This form of approximate Thompson sampling results in good exploration in unknown environments. The approach can also be seen as a Bayesian generalisation of least-squares policy iteration, where the empirical transition matrix is replaced with a sample from the posterior.
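The procedure the abstract describes, maintaining a Bayesian linear-Gaussian posterior over the system dynamics and sampling one transition model from it before planning, can be sketched roughly as follows. This is an illustrative toy, not the authors' implementation: the feature map, noise levels, and 1-D dynamics are assumptions, and the approximate dynamic programming step on the sampled model is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior(Phi, Y, noise_var, prior_var=10.0):
    """Bayesian linear-Gaussian posterior over dynamics weights W,
    where next_state is modelled as Phi(state, action) @ W + noise."""
    d = Phi.shape[1]
    S_inv = np.eye(d) / prior_var + Phi.T @ Phi / noise_var
    S = np.linalg.inv(S_inv)              # posterior covariance
    M = S @ Phi.T @ Y / noise_var         # posterior mean, shape (d, state_dim)
    return M, S

def sample_model(M, S, rng):
    """Thompson-style step: draw one plausible dynamics model from the posterior."""
    return np.column_stack(
        [rng.multivariate_normal(M[:, j], S) for j in range(M.shape[1])]
    )

# Toy 1-D dynamics: s' = 0.9*s + 0.5*a + noise; features are simply (s, a).
true_W = np.array([[0.9], [0.5]])
Phi = rng.normal(size=(200, 2))           # observed (state, action) features
Y = Phi @ true_W + 0.05 * rng.normal(size=(200, 1))

M, S = posterior(Phi, Y, noise_var=0.05**2)
W_sample = sample_model(M, S, rng)
# A policy would then be obtained by approximate dynamic programming
# on the sampled model W_sample (not shown here).
```

With few observations the posterior is broad, so sampled models vary widely and induce exploratory policies; as data accumulates, samples concentrate near the empirical model, which is the sense in which the abstract calls this a Bayesian generalisation of least-squares policy iteration.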

Files
Name: lbrl_ijcai.pdf
Access type: Open access
Size: 344.04 KB
Format: Adobe PDF
Checksum (MD5): b77927c7c5e4de3ddb9541c37b288554

Contact: infoscience@epfl.ch

Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.