Almost surely constrained convex optimization

We propose a stochastic gradient framework for solving stochastic composite convex optimization problems with a (possibly infinite) number of linear inclusion constraints that must be satisfied almost surely. We use smoothing and homotopy techniques to handle the constraints without the need for matrix-valued projections. We show that our stochastic gradient algorithm achieves an $\mathcal{O}(\log(k)/\sqrt{k})$ convergence rate for general convex objectives and an $\mathcal{O}(\log(k)/k)$ convergence rate for restricted strongly convex objectives. These rates are known to be optimal up to logarithmic factors, even without constraints. We demonstrate the performance of our algorithm with numerical experiments on basis pursuit, hard-margin support vector machines, and portfolio optimization, and show that it achieves state-of-the-art practical performance.
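For intuition, here is a minimal sketch of the kind of template the abstract describes; the notation, the quadratic-penalty form of the smoothing, and the homotopy schedule below are illustrative assumptions rather than the paper's exact formulation. The problem is of the form

$$\min_{x \in \mathbb{R}^d} \; \mathbb{E}_{\xi}\big[f(x,\xi)\big] + g(x) \quad \text{subject to} \quad A_{\xi} x - b_{\xi} \in K_{\xi} \ \ \text{almost surely},$$

where $g$ is a proximable convex regularizer and each constraint is a linear inclusion indexed by the random variable $\xi$. Smoothing replaces the hard constraint by a penalty given by the Moreau envelope of its indicator,

$$\frac{1}{2\beta_k}\,\mathbb{E}_{\xi}\Big[\mathrm{dist}^2\big(A_{\xi} x - b_{\xi},\, K_{\xi}\big)\Big],$$

whose stochastic gradient at a sampled $\xi_k$ is

$$\frac{1}{\beta_k}\, A_{\xi_k}^{\top}\Big(A_{\xi_k} x - b_{\xi_k} - \mathrm{proj}_{K_{\xi_k}}\big(A_{\xi_k} x - b_{\xi_k}\big)\Big).$$

Homotopy then drives the smoothing parameter $\beta_k \to 0$ as the iterations proceed, so the iterates approach feasibility while only vector-valued projections onto the sets $K_{\xi}$ are ever computed.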


Published in:
Proceedings of the International Conference on Machine Learning (ICML 2019)
Presented at:
36th International Conference on Machine Learning (ICML 2019), Long Beach, USA, June 9-15, 2019
Year:
2019
Note:
Scheduled publication of Proceedings: Volume 97 is assigned to ICML 2019 (ISSN: 2640-3498)