conference paper
A Large Deviations Perspective on Policy Gradient Algorithms
2024
Motivated by policy gradient methods in the context of reinforcement learning, we derive the first large deviation rate function for the iterates generated by stochastic gradient descent for possibly non-convex objectives satisfying a Polyak-Łojasiewicz condition. Leveraging the contraction principle from large deviations theory, we illustrate the potential of this result by showing how convergence properties of policy gradient with a softmax parametrization and an entropy regularized objective can be naturally extended to a wide spectrum of other policy parametrizations.
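The Polyak-Łojasiewicz (PL) condition mentioned in the abstract can be stated in its standard form as follows; this is the usual textbook formulation and is not taken verbatim from the paper:

```latex
% A differentiable function f : R^d -> R satisfies the PL condition
% with constant mu > 0 if, for all x,
%   (1/2) * ||grad f(x)||^2 >= mu * (f(x) - f^*),
% where f^* = inf_x f(x).
\frac{1}{2}\,\|\nabla f(x)\|^2 \;\ge\; \mu\bigl(f(x) - f^*\bigr),
\qquad f^* = \inf_x f(x).
% Under this condition (with L-Lipschitz gradient), gradient descent
% with step size 1/L converges linearly even for non-convex f:
%   f(x_{k+1}) - f^* <= (1 - mu/L) * (f(x_k) - f^*).
```

This condition is weaker than strong convexity yet still guarantees linear convergence of gradient methods, which is why it appears as the regularity assumption for the non-convex objectives treated in the paper.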
Name: jongeneel24a.pdf
Type: main document
Access type: openaccess
License Condition: N/A
Size: 299.35 KB
Format: Adobe PDF
Checksum (MD5): f6863809ee4362052434da7595e131e4