Distributed Value-Function Learning with Linear Convergence Rates

In this paper we develop a fully decentralized algorithm for off-policy policy evaluation with linear function approximation. The proposed algorithm belongs to the variance-reduced family and achieves linear convergence with O(1) memory requirements. We consider the setting in which a collection of agents hold distinct, fixed-size datasets gathered under different behavior policies (none of which is required to explore the full state space) and collaborate to evaluate a common target policy. The network approach allows all agents to converge to the optimal solution even in situations where no individual agent could converge without cooperation. We provide simulations illustrating the effectiveness of the method on a Linear Quadratic Regulator (LQR) problem.
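As background for the abstract above, the following is a minimal sketch of policy evaluation with linear function approximation: plain TD(0) on a small random-walk chain with one-hot features. It is not the decentralized, variance-reduced algorithm of the paper; all names, the toy MDP, and the step sizes are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only (not the paper's algorithm): estimate the value
# function V^pi of a random-walk policy on a 5-state chain, using a linear
# parameterization V(s) ~ phi[s] @ w and the TD(0) update.
rng = np.random.default_rng(0)
n_states, gamma, alpha = 5, 0.9, 0.1
phi = np.eye(n_states)          # one-hot features (hypothetical choice)
w = np.zeros(n_states)          # value-function weights

def step(s):
    # target policy: move left/right uniformly; reward 1 on reaching the
    # rightmost state, which acts as a terminal state here
    s2 = min(max(s + rng.choice([-1, 1]), 0), n_states - 1)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r

s = 2
for _ in range(20000):
    s2, r = step(s)
    td_error = r + gamma * (phi[s2] @ w) - (phi[s] @ w)
    w += alpha * td_error * phi[s]      # TD(0) update on the linear weights
    s = 2 if s2 == n_states - 1 else s2  # restart each episode mid-chain
```

Under this parameterization the learned weights increase toward the rewarding end of the chain; the paper's contribution is to perform this kind of evaluation off-policy, from distributed fixed datasets, with variance reduction yielding a linear convergence rate.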

Published in:
2019 18th European Control Conference (ECC), 505-511
Presented at:
18th European Control Conference (ECC), Naples, Italy, Jun 25-28, 2019
New York: IEEE, 2019

