Adaptive Gradient Descent without Descent
We present a strikingly simple proof that two rules are sufficient to automate gradient descent: 1) don’t increase the stepsize too fast and 2) don’t overstep the local curvature. No need for functional values, no line search, no information about the function except for the gradients. By following these rules, you get a method adaptive to the local geometry, with convergence guarantees depending only on the smoothness in a neighborhood of a solution. As long as the problem is convex, our method converges even if the global smoothness constant is infinite. As an illustration, it can minimize an arbitrary twice continuously differentiable convex function. We examine its performance on a range of convex and nonconvex problems, including logistic regression and matrix factorization.
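As a concrete illustration of the two rules, below is a minimal sketch (not code from the paper) of an adaptive gradient step driven only by gradients. It assumes the stepsize update lam_k = min( sqrt(1 + theta_{k-1}) * lam_{k-1}, ||x_k - x_{k-1}|| / (2 ||grad(x_k) - grad(x_{k-1})||) ) with theta_k = lam_k / lam_{k-1}; the function name adgd and the defaults lam0 and n_iter are illustrative choices, not taken from the abstract.

import numpy as np

def adgd(grad, x0, lam0=1e-6, n_iter=1000):
    # Adaptive gradient descent using only gradients: no function values,
    # no line search. Two rules set the stepsize:
    #   1) grow it by at most a factor sqrt(1 + theta), and
    #   2) never exceed the local curvature estimate ||dx|| / (2 ||dg||).
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    lam_prev = lam0                     # small initial stepsize (assumed default)
    x = x_prev - lam_prev * g_prev      # first step
    theta = np.inf                      # lam_k / lam_{k-1}; infinite before any update
    for _ in range(n_iter):
        g = grad(x)
        dx = np.linalg.norm(x - x_prev)
        dg = np.linalg.norm(g - g_prev)
        # Rule 2: don't overstep the local curvature.
        curv_bound = dx / (2.0 * dg) if dg > 0 else np.inf
        # Rule 1: don't increase the stepsize too fast.
        lam = min(np.sqrt(1.0 + theta) * lam_prev, curv_bound)
        if not np.isfinite(lam):        # both bounds infinite (e.g. identical gradients)
            lam = lam_prev
        x_prev, g_prev = x, g
        x = x - lam * g
        theta = lam / lam_prev
        lam_prev = lam
    return x

# Example: minimize the convex quadratic f(x) = 0.5 * ||A x - b||^2.
# A = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, -1.0])
# x_star = adgd(lambda x: A.T @ (A @ x - b), np.zeros(2))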
Adaptive Gradient.pdf (Preprint, open access, Adobe PDF, 850.25 KB)
ad_grad_icml.pdf (Publisher's version, open access, Adobe PDF, 8.59 MB)