Title: Improved bounds for discretization of Langevin diffusions: Near-optimal rates without convexity
Authors: Mou, Wenlong; Flammarion, Nicolas; Wainwright, Martin J.; Bartlett, Peter L.
Type: Journal article (research article)
Dates: 2019-12-02; 2022-08-01
DOI: 10.3150/21-BEJ1343
Handle: https://infoscience.epfl.ch/handle/20.500.14299/163517
arXiv: 1907.11331

Abstract: We consider minimizing a nonconvex, smooth function $f$ on a Riemannian manifold $\mathcal{M}$. We show that a perturbed version of the Riemannian gradient descent algorithm converges to a second-order stationary point (and hence is able to escape saddle points on the manifold). The rate of convergence depends as $1/\epsilon^2$ on the accuracy $\epsilon$, matching a rate previously known only for unconstrained smooth minimization. The convergence rate depends polylogarithmically on the manifold dimension $d$, and hence is almost dimension-free. The rate also depends polynomially on the parameters describing the curvature of the manifold and the smoothness of the function. While the unconstrained (Euclidean) problem is well studied, our result is the first to establish such a rate for nonconvex, manifold-constrained problems.
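The perturbed Riemannian gradient descent scheme described in the abstract can be made concrete with a minimal sketch on the unit sphere: project the ambient gradient onto the tangent space, retract by normalization, and inject a small random tangent perturbation when the gradient is small. This is an illustrative sketch, not the paper's implementation; the function names (`tangent_project`, `retract`, `perturbed_rgd`) and all parameter values (`step`, `eps`, `radius`, `n_iters`) are assumptions.

```python
import numpy as np

def tangent_project(x, g):
    # Project ambient vector g onto the tangent space of the unit sphere at x.
    return g - np.dot(g, x) * x

def retract(x, v):
    # Retract the tangent step v at x back onto the sphere by renormalizing.
    y = x + v
    return y / np.linalg.norm(y)

def perturbed_rgd(grad_f, x0, step=0.01, eps=1e-3, radius=1e-2,
                  n_iters=10_000, seed=0):
    # Sketch of perturbed Riemannian gradient descent on the sphere.
    # When the Riemannian gradient is small (a candidate saddle point or
    # minimum), take a random step in a tangent ball of the given radius;
    # a full implementation would also verify escape progress afterward.
    rng = np.random.default_rng(seed)
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_iters):
        g = tangent_project(x, grad_f(x))
        if np.linalg.norm(g) < eps:
            xi = tangent_project(x, rng.normal(size=x.shape))
            xi *= radius * rng.uniform() / np.linalg.norm(xi)
            x = retract(x, xi)
        else:
            x = retract(x, -step * g)
    return x

# Usage (hypothetical example): minimize f(x) = x^T A x on the sphere;
# the minimizer is the eigenvector of the smallest eigenvalue of A.
d = 10
A = np.diag(np.arange(1.0, d + 1))
x_star = perturbed_rgd(lambda x: 2 * A @ x, x0=np.ones(d))
```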