Safe Adaptive Importance Sampling

Importance sampling has become an indispensable strategy for speeding up optimization algorithms in large-scale applications. Improved adaptive variants, which use importance values defined by the complete gradient information that changes during optimization, enjoy favorable theoretical properties but are typically computationally infeasible. In this paper we propose an efficient approximation of gradient-based sampling that relies on safe bounds on the gradient. The proposed sampling distribution (i) is provably the best sampling with respect to the given bounds, (ii) is always better than uniform sampling and fixed importance sampling, and (iii) can be computed efficiently, in many applications at negligible extra cost. The proposed sampling scheme is generic and can easily be integrated into existing algorithms. In particular, we show that coordinate descent (CD) and stochastic gradient descent (SGD) can enjoy a significant speed-up under the novel scheme. The proven efficiency of the proposed sampling is verified by extensive numerical testing.
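
To illustrate the general idea of gradient-based importance sampling in coordinate descent, the following minimal Python sketch samples coordinates with probability proportional to (a stand-in for) per-coordinate gradient-magnitude bounds on a least-squares objective. It is an illustrative assumption, not the paper's algorithm: the paper's "safe" scheme derives the provably optimal distribution for given lower/upper gradient bounds and avoids recomputing the full gradient, whereas this sketch simply reuses the exact gradient as its bound.

import numpy as np

def cd_importance_sampling(A, b, steps=1000, seed=None):
    """Coordinate descent on 0.5*||Ax - b||^2, sampling coordinate i
    with probability proportional to a bound on |grad_i| (sketch only)."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    L = (A ** 2).sum(axis=0) + 1e-12      # coordinate-wise Lipschitz constants
    residual = A @ x - b
    for _ in range(steps):
        grad = A.T @ residual             # full gradient; used here only as a
                                          # placeholder for the paper's safe bounds
        bounds = np.abs(grad) + 1e-12     # stand-in for safe upper bounds on |grad_i|
        p = bounds / bounds.sum()         # importance sampling distribution
        i = rng.choice(d, p=p)
        step = grad[i] / L[i]
        x[i] -= step
        residual -= step * A[:, i]        # keep residual consistent with updated x
    return x

Sampling proportionally to gradient magnitudes concentrates updates on the coordinates that currently matter most; the contribution of the paper is to obtain a comparable effect from cheap, safely maintained bounds rather than from the full gradient computed above.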


Published in:
Advances in Neural Information Processing Systems 30 (NIPS 2017)
Presented at:
Neural Information Processing Systems (NIPS), Long Beach, USA, December 4-9, 2017
Year:
2017