000258063 001__ 258063
000258063 005__ 20190619220051.0
000258063 0247_ $$2doi$$a10.1007/s11590-018-1331-1
000258063 037__ $$aARTICLE
000258063 245__ $$aOn the linear convergence of the stochastic gradient method with constant step-size
000258063 260__ $$c2018-09-25
000258063 269__ $$a2018-09-25
000258063 336__ $$aJournal Articles
000258063 520__ $$aThe strong growth condition (SGC) is known to be a sufficient condition for linear convergence of the stochastic gradient method using a constant step-size γ (SGM-CS). In this paper, we provide a necessary condition, for the linear convergence of SGM-CS, that is weaker than SGC. Moreover, when this necessary condition is violated up to an additive perturbation σ, we show that both the projected stochastic gradient method using a constant step-size, under the restricted strong convexity assumption, and the proximal stochastic gradient method, under the strong convexity assumption, exhibit linear convergence to a noise-dominated region, whose distance to the optimal solution is proportional to γσ.
000258063 6531_ $$aStochastic gradient
000258063 6531_ $$aLinear convergence
000258063 6531_ $$aStrong growth condition
000258063 700__ $$g199128$$aCevher, Volkan$$0243957
000258063 700__ $$0249425$$aVu, Cong Bang$$g263340
000258063 773__ $$tOptimization Letters$$j12$$q1-11
000258063 8560_ $$fgosia.baltaian@epfl.ch
000258063 8564_ $$uhttps://infoscience.epfl.ch/record/258063/files/linearSPG.pdf$$s204431
000258063 909C0 $$xU12179$$pLIONS$$mvolkan.cevher@epfl.ch$$0252306
000258063 909CO $$qGLOBAL_SET$$pSTI$$particle$$ooai:infoscience.epfl.ch:258063
000258063 960__ $$agosia.baltaian@epfl.ch
000258063 961__ $$apierre.devaud@epfl.ch
000258063 973__ $$aEPFL$$sPUBLISHED$$rREVIEWED
000258063 980__ $$aARTICLE
000258063 981__ $$aoverwrite