Abstract

In this paper we develop a fully decentralized algorithm for policy evaluation with off-policy learning and linear function approximation. The proposed algorithm is of the variance-reduced kind and achieves linear convergence with O(1) memory requirements. We consider the setting where a collection of agents, each holding a distinct, fixed-size dataset gathered under its own behavior policy (none of which is required to explore the full state space), collaborate to evaluate a common target policy. The networked approach allows all agents to converge to the optimal solution even in situations where no single agent could converge on its own without cooperation. We provide simulations illustrating the effectiveness of the method on a Linear Quadratic Regulator (LQR) problem.
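To make the problem setting concrete, the sketch below is a minimal, hypothetical illustration (not the paper's variance-reduced algorithm): each agent holds a fixed local dataset collected under its own behavior policy, performs an off-policy TD(0)-style update with importance ratios on a linear value estimate, and then mixes its parameters with its neighbors through an assumed doubly stochastic weight matrix W. All data, policies, and network weights here are synthetic placeholders.

import numpy as np

rng = np.random.default_rng(0)
n_agents, n_features, gamma, step = 4, 5, 0.9, 0.05

# Doubly stochastic mixing matrix for a ring network (assumed topology).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i + 1) % n_agents] = 0.25
    W[i, (i - 1) % n_agents] = 0.25

# Each agent's fixed dataset: tuples (phi(s), r, phi(s'), rho), where rho is
# the importance ratio pi(a|s)/b_i(a|s) for that agent's behavior policy b_i.
def make_dataset(size):
    phi = rng.normal(size=(size, n_features))
    phi_next = rng.normal(size=(size, n_features))
    r = rng.normal(size=size)
    rho = rng.uniform(0.5, 1.5, size=size)  # placeholder importance ratios
    return phi, r, phi_next, rho

datasets = [make_dataset(100) for _ in range(n_agents)]
theta = np.zeros((n_agents, n_features))  # one linear value estimate per agent

for it in range(500):
    # 1) Local off-policy TD(0)-style correction on a sampled transition.
    for i, (phi, r, phi_next, rho) in enumerate(datasets):
        k = rng.integers(len(r))
        delta = r[k] + gamma * phi_next[k] @ theta[i] - phi[k] @ theta[i]
        theta[i] += step * rho[k] * delta * phi[k]
    # 2) Consensus step: average parameters with neighbors.
    theta = W @ theta

print("disagreement between agents:", np.linalg.norm(theta - theta.mean(axis=0)))

Unlike the proposed method, this plain stochastic update is not variance-reduced and carries no convergence guarantee; it only shows how local off-policy datasets and a consensus step interact in the decentralized setting described above.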
