000222751 001__ 222751
000222751 005__ 20190317000547.0
000222751 037__ $$aCONF
000222751 245__ $$aStochastic Three-Composite Convex Minimization
000222751 269__ $$a2016
000222751 260__ $$c2016
000222751 336__ $$aConference Papers
000222751 520__ $$aWe propose a stochastic optimization method for minimizing the sum of three convex functions, one of which has a Lipschitz continuous gradient as well as restricted strong convexity. Our approach is most suitable in settings where it is computationally advantageous to process the smooth term in the decomposition with its stochastic gradient estimate and the other two functions separately with their proximal operators, as in doubly regularized empirical risk minimization problems. We characterize the convergence of the proposed algorithm in expectation under standard assumptions on the stochastic gradient estimate of the smooth term. Our method operates in the primal space and can be considered a stochastic extension of the three-operator splitting method. Numerical evidence supports the effectiveness of our method on real-world problems.
000222751 6531_ $$astochastic optimization
000222751 6531_ $$aconvex optimization
000222751 6531_ $$aoperator splitting
000222751 6531_ $$acomposite optimization
000222751 700__ $$0248415$$g233086$$aYurtsever, Alp
000222751 700__ $$0249425$$g263340$$aVu, Cong Bang
000222751 700__ $$aCevher, Volkan$$g199128$$0243957
000222751 7112_ $$dDecember 5-10, 2016$$cBarcelona, Spain$$a30th Conference on Neural Information Processing Systems (NIPS2016)
000222751 8564_ $$uhttps://infoscience.epfl.ch/record/222751/files/YVC2016_S3CM_infoscience.pdf$$zn/a$$s2389400$$yn/a
000222751 909C0 $$xU12179$$0252306$$pLIONS
000222751 909CO $$qGLOBAL_SET$$pconf$$ooai:infoscience.tind.io:222751$$pSTI
000222751 917Z8 $$x233086
000222751 937__ $$aEPFL-CONF-222751
000222751 973__ $$rREVIEWED$$sACCEPTED$$aEPFL
000222751 980__ $$aCONF