Safe Adaptive Importance Sampling
Importance sampling has become an indispensable strategy for speeding up optimization algorithms in large-scale applications. Improved adaptive variants, which use importance values defined by the complete gradient information as it changes during optimization, enjoy favorable theoretical properties but are typically computationally infeasible. In this paper we propose an efficient approximation of gradient-based sampling that relies on safe bounds on the gradient. The proposed sampling distribution (i) is provably the best sampling with respect to the given bounds, (ii) is always better than uniform sampling and fixed importance sampling, and (iii) can be computed efficiently, in many applications at negligible extra cost. The proposed sampling scheme is generic and can easily be integrated into existing algorithms. In particular, we show that coordinate descent (CD) and stochastic gradient descent (SGD) can enjoy a significant speed-up under the novel scheme. The efficiency of the proposed sampling is verified by extensive numerical experiments.
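To make the idea concrete, the sketch below shows how a safe-bound-driven sampling distribution can be plugged into a coordinate-wise SGD update. It is a minimal illustration, not the paper's algorithm: the proportional-to-upper-bound rule stands in for the provably optimal distribution derived in the paper, and all names (`sample_coordinate`, `importance_sgd_step`, `upper_bounds`, the crude column-norm bound in the demo) are hypothetical.

```python
import numpy as np

def sample_coordinate(upper_bounds, rng):
    """Sample a coordinate with probability proportional to a safe
    upper bound on its gradient magnitude.

    Simplified stand-in for the paper's optimal distribution: it never
    under-weights a coordinate whose true gradient could be large.
    """
    p = upper_bounds / upper_bounds.sum()
    i = rng.choice(len(p), p=p)
    return i, p[i]

def importance_sgd_step(x, grad_fn, upper_bounds, lr, rng):
    """One coordinate-wise SGD step under importance sampling.

    grad_fn(x, i) returns the i-th partial derivative. The 1/(n * p_i)
    reweighting keeps the update an unbiased estimate of grad(x)/n,
    so the scheme matches uniform sampling in expectation.
    """
    n = len(x)
    i, p_i = sample_coordinate(upper_bounds, rng)
    x = x.copy()
    x[i] -= lr * grad_fn(x, i) / (n * p_i)
    return x

# Tiny demo on a least-squares objective f(x) = 0.5 * ||A x - b||^2
# with synthetic data; the static column-norm bound is a crude,
# hypothetical choice of safe bound for illustration only.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
b = rng.normal(size=50)
grad_fn = lambda x, i: A[:, i] @ (A @ x - b)
upper_bounds = np.linalg.norm(A, axis=0) ** 2
x = np.zeros(10)
for _ in range(1000):
    x = importance_sgd_step(x, grad_fn, upper_bounds, lr=1e-3, rng=rng)
```

In the actual scheme the bounds are updated cheaply as the iterates move, which is what makes the adaptive distribution computable at negligible extra cost.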