A Large Deviations Perspective on Policy Gradient Algorithms

Authors: Jongeneel, Wouter; Kuhn, Daniel; Li, Mengmeng
Year: 2024
URL: https://infoscience.epfl.ch/handle/20.500.14299/240841

Abstract: Motivated by policy gradient methods in the context of reinforcement learning, we derive the first large deviation rate function for the iterates generated by stochastic gradient descent for possibly non-convex objectives satisfying a Polyak-Łojasiewicz condition. Leveraging the contraction principle from large deviations theory, we illustrate the potential of this result by showing how convergence properties of policy gradient with a softmax parametrization and an entropy-regularized objective can be naturally extended to a wide spectrum of other policy parametrizations.

Keywords: Policy gradient algorithms; Polyak-Łojasiewicz condition; Large deviations theory
Document type: Conference paper
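
As a rough illustration of the setting described in the abstract (not code from the paper), the sketch below runs stochastic gradient descent on a standard non-convex one-dimensional objective satisfying a Polyak-Łojasiewicz inequality, f(x) = x^2 + 3 sin^2(x), and empirically estimates the probability that the final suboptimality exceeds a threshold; such tail probabilities are the kind of event a large deviation rate function quantifies. The objective, step size, noise model, and thresholds are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: SGD on a non-convex objective that satisfies a
# Polyak-Lojasiewicz (PL) inequality.  The gradient is perturbed by Gaussian
# noise, mimicking stochastic iterates whose deviation probabilities a large
# deviation rate function would characterize.

def f(x):
    # Non-convex but PL; global minimizer at x* = 0 with f(x*) = 0.
    return x**2 + 3.0 * np.sin(x)**2

def grad_f(x):
    return 2.0 * x + 3.0 * np.sin(2.0 * x)

def sgd_path(x0, step=0.05, noise_std=0.5, n_steps=200, rng=None):
    """One sample path of SGD with additive Gaussian gradient noise."""
    rng = np.random.default_rng() if rng is None else rng
    xs = [x0]
    for _ in range(n_steps):
        g = grad_f(xs[-1]) + noise_std * rng.standard_normal()
        xs.append(xs[-1] - step * g)
    return np.array(xs)

# Run many independent sample paths and report how often the final
# suboptimality f(x_T) - f(x*) exceeds a threshold.
rng = np.random.default_rng(0)
paths = np.array([sgd_path(3.0, rng=rng) for _ in range(1000)])
final_gap = f(paths[:, -1]) - f(0.0)
print("empirical P[f(x_T) - f* > 0.5] =", np.mean(final_gap > 0.5))
```

The choice f(x) = x^2 + 3 sin^2(x) is a commonly used example of a non-convex function that nevertheless satisfies a PL condition, so SGD on it exhibits the fast concentration toward the optimum that the paper's rate-function analysis concerns.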