Online Adaptive Methods, Universality and Acceleration

We present a novel method for convex unconstrained optimization that, without any modifications, ensures: (i) an accelerated convergence rate for smooth objectives, (ii) the standard convergence rate in the general (non-smooth) setting, and (iii) the standard convergence rate in the stochastic optimization setting. To the best of our knowledge, this is the first method that simultaneously applies to all of the above settings. At the heart of our method is an adaptive learning rate rule that employs importance weights, in the spirit of adaptive online learning algorithms [12, 20], combined with an update that linearly couples two sequences, in the spirit of [2]. An empirical examination of our method demonstrates its applicability to the above-mentioned scenarios and corroborates our theoretical findings.
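To make the two ingredients named in the abstract concrete, the following is a hedged sketch (not the paper's exact algorithm or constants) of how an importance-weighted adaptive learning rate can be combined with a linear coupling of two sequences. The function name, the weight choice `alpha_t = t + 1`, and the parameters `D` (a diameter-like scale) and `G0` (an initial accumulator value) are assumptions made for illustration.

```python
import numpy as np

def adaptive_coupled_descent(grad, x0, D=10.0, G0=1.0, steps=200):
    """Illustrative sketch: an adaptive step size built from
    importance-weighted squared gradient norms, plus a linear coupling
    of two sequences z (weighted online steps) and y (gradient steps).
    This is an assumed form, not the authors' exact method."""
    z = np.asarray(x0, dtype=float)
    y = z.copy()
    sum_sq = G0                       # G0 + sum_t alpha_t^2 * ||g_t||^2
    y_avg = np.zeros_like(z)
    w_total = 0.0
    for t in range(steps):
        alpha = t + 1.0               # importance weight, grows linearly
        tau = 1.0 / alpha             # coupling coefficient
        x = tau * z + (1.0 - tau) * y # linearly couple the two sequences
        g = grad(x)
        sum_sq += alpha**2 * g.dot(g)
        eta = 2.0 * D / np.sqrt(sum_sq)  # adaptive, importance-weighted step
        z = z - alpha * eta * g       # weighted "online" sequence
        y = x - eta * g               # plain gradient step from the coupled point
        y_avg += alpha * y            # importance-weighted iterate averaging
        w_total += alpha
    return y_avg / w_total

# Usage on a simple smooth objective f(x) = 0.5 * ||x||^2 (so grad(x) = x):
x_star = adaptive_coupled_descent(lambda x: x, np.array([3.0, -4.0]))
```

On this quadratic the weighted-average iterate drifts toward the minimizer at the origin; the adaptive denominator keeps the early, large steps from destabilizing the later ones.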


Published in:
Advances in Neural Information Processing Systems 31 (NIPS 2018)
Presented at:
32nd Conference on Neural Information Processing Systems (NIPS), Montreal, Canada, Dec 2-8, 2018
Year:
2018
Publisher:
Neural Information Processing Systems (NIPS), La Jolla
ISSN:
1049-5258




 Record created 2019-06-18, last modified 2020-04-20

