Coordinate Descent with Bandit Sampling

Coordinate descent methods usually minimize a cost function by updating a random decision variable (corresponding to one coordinate) at a time. Ideally, we would update the decision variable that yields the largest decrease in the cost function. However, finding this coordinate would require checking all of them, which would effectively negate the improvement in computational tractability that coordinate descent is intended to afford. To address this, we propose a new adaptive method for selecting a coordinate. First, we find a lower bound on the amount the cost function decreases when a coordinate is updated. We then use a multi-armed bandit algorithm to learn which coordinates yield the largest lower bounds, interleaving this learning with conventional coordinate descent updates, but selecting each coordinate with probability proportional to its expected decrease. We show that our approach improves the convergence of coordinate descent methods both theoretically and experimentally.
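The idea in the abstract can be illustrated with a minimal sketch. The code below is not the paper's algorithm; it assumes a smooth quadratic objective f(x) = 0.5 xᵀAx - bᵀx, uses the standard per-coordinate guaranteed decrease gᵢ²/(2Lᵢ) (with Lᵢ the coordinate-wise Lipschitz constant) as the lower bound, and stands in a simple epsilon-greedy, recency-weighted estimator for the paper's bandit scheme; the function name and all parameters are illustrative.

```python
import numpy as np

def bandit_coordinate_descent(A, b, steps=2000, eta=0.5, eps=0.1, seed=0):
    """Minimize f(x) = 0.5 x^T A x - b^T x by coordinate descent,
    sampling coordinates in proportion to an estimated decrease."""
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    L = np.diag(A).copy()   # coordinate-wise Lipschitz constants of the gradient
    r = np.ones(n)          # bandit estimates of each coordinate's decrease
    for _ in range(steps):
        # Mix proportional sampling with a uniform component so every
        # coordinate keeps being explored (epsilon-greedy).
        p = (1 - eps) * r / r.sum() + eps / n
        i = rng.choice(n, p=p)
        g = A[i] @ x - b[i]              # partial derivative along coordinate i
        decrease = g * g / (2 * L[i])    # guaranteed decrease for this update
        x[i] -= g / L[i]                 # exact minimization along coordinate i
        # Recency-weighted update of the bandit estimate for coordinate i.
        r[i] = (1 - eta) * r[i] + eta * decrease
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = bandit_coordinate_descent(A, b)
# x approaches the minimizer, i.e. the solution of A x = b
```

The key point the sketch captures is that the sampler spends no extra gradient evaluations: each update computes only the partial derivative of the chosen coordinate, and the observed decrease doubles as the bandit's reward signal.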


Published in:
Advances in Neural Information Processing Systems 31 (NIPS 2018)
Presented at:
32nd Conference on Neural Information Processing Systems (NIPS), Montreal, Canada, December 2-8, 2018
Year:
2018
Publisher:
Neural Information Processing Systems (NIPS), La Jolla
ISSN:
1049-5258
 Record created 2019-06-18, last modified 2020-04-21

