We formulate gradient-based Markov chain Monte Carlo (MCMC) sampling as optimization on the space of probability measures, with Kullback-Leibler (KL) divergence as the objective functional. We show that an underdamped form of the Langevin algorithm performs accelerated gradient descent in this metric. To characterize the convergence of the algorithm, we construct a Lyapunov functional and exploit hypocoercivity of the underdamped Langevin algorithm. As an application, we show that accelerated rates can be obtained for a class of nonconvex functions with the Langevin algorithm.
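The underdamped Langevin dynamics referenced in the abstract couple the position variable with a velocity variable, which is the source of the acceleration effect. Below is a minimal Python sketch of a plain Euler-Maruyama discretization of these dynamics; the function names, the quadratic example potential, and all parameter values are illustrative assumptions, and the paper itself analyzes a more careful discretization and its continuous-time limit, not this exact scheme.

```python
import numpy as np

def grad_f(x):
    # Hypothetical example potential: f(x) = ||x||^2 / 2, so grad f(x) = x.
    # The target density is proportional to exp(-f(x)).
    return x

def underdamped_langevin(x0, v0, grad_f, gamma=2.0, step=0.01,
                         n_steps=10_000, rng=None):
    """Euler-Maruyama sketch of the underdamped Langevin SDE
        dx = v dt
        dv = -gamma * v dt - grad_f(x) dt + sqrt(2 * gamma) dB_t,
    whose x-marginal at stationarity is proportional to exp(-f(x))."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    v = np.array(v0, dtype=float)
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        # Velocity update: friction, gradient force, and Brownian kick.
        v = v - step * (gamma * v + grad_f(x)) + np.sqrt(2.0 * gamma * step) * noise
        # Position update driven by the refreshed velocity.
        x = x + step * v
        samples.append(x.copy())
    return np.array(samples)

samples = underdamped_langevin(x0=np.zeros(2), v0=np.zeros(2), grad_f=grad_f)
# For the standard Gaussian target above, the empirical mean should be near 0
# and the empirical variance near 1 (up to discretization and Monte Carlo error).
print(samples.mean(axis=0), samples.var(axis=0))
```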
Type: research article
Web of Science ID: WOS:000649113800019
Date Issued: 2021-08-01
Volume: 27
Issue: 3
Pages: 1942-1992
Editorial or peer reviewed: REVIEWED
Written at: EPFL
Available on Infoscience: June 19, 2021