A Large Deviations Perspective on Policy Gradient Algorithms
Motivated by policy gradient methods in the context of reinforcement learning, we derive the first large deviation rate function for the iterates of stochastic gradient descent applied to possibly non-convex objectives that satisfy a Polyak-Łojasiewicz condition. Leveraging the contraction principle from large deviations theory, we illustrate the potential of this result by showing how convergence properties of policy gradient with a softmax parametrization and an entropy-regularized objective extend naturally to a wide spectrum of other policy parametrizations.
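To make the setting in the abstract concrete, the sketch below runs stochastic softmax policy gradient on a K-armed bandit with an entropy-regularized objective J(θ) = E_{a∼π_θ}[r(a)] + τ H(π_θ). This is an illustrative simplification, not the paper's algorithm or MDP setting; the reward vector, step size η, and entropy weight τ are assumptions chosen for the example. The REINFORCE-style estimator g = (r(a) − τ(log π_θ(a) + 1)) ∇_θ log π_θ(a) is an unbiased gradient of J, and for a softmax parametrization ∇_θ log π_θ(a) = e_a − π_θ.

```python
# Minimal sketch (illustrative, not the paper's method): stochastic softmax
# policy gradient with entropy regularization on a K-armed bandit.
# Rewards, step size, and entropy weight below are assumed for the example.
import numpy as np

rng = np.random.default_rng(0)
r = np.array([1.0, 0.5, 0.2])       # assumed per-arm rewards
tau, eta, K = 0.1, 0.05, len(r)     # entropy weight, step size, number of arms
theta = np.zeros(K)                 # softmax logits

for t in range(5000):
    pi = np.exp(theta - theta.max())
    pi /= pi.sum()                               # softmax policy pi_theta
    a = rng.choice(K, p=pi)                      # sample an action
    grad_log_pi = -pi.copy()
    grad_log_pi[a] += 1.0                        # grad log pi_theta(a) = e_a - pi
    g = (r[a] - tau * (np.log(pi[a]) + 1.0)) * grad_log_pi
    theta += eta * g                             # stochastic gradient ascent step

print(np.round(pi, 3))  # mass concentrates on arm 0, smoothed by the entropy term
```

The random fluctuations of the iterates θ_t around the regularized optimum are exactly the kind of stochastic-gradient deviations whose tail probabilities a large deviation rate function quantifies.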