Title: On the Convergence of Stochastic Primal-Dual Hybrid Gradient
Authors: Alacaoglu, Ahmet; Fercoq, Olivier; Cevher, Volkan
Published: 2022-01-01
Deposited: 2022-07-04
DOI: 10.1137/19M1296252
Handle: https://infoscience.epfl.ch/handle/20.500.14299/188956
Web of Science: WOS:000809754200033
Type: journal article (research article)
Subjects: Mathematics, Applied; Mathematics
Keywords: convex optimization; coordinate descent; primal-dual algorithms; alternating direction method; fixed-point iterations; linear convergence; algorithms; framework

Abstract: In this paper, we analyze the recently proposed stochastic primal-dual hybrid gradient (SPDHG) algorithm and provide new theoretical results. In particular, we prove almost sure convergence of the iterates to a solution under convexity, and linear convergence under additional structure, using standard step sizes independent of strong convexity or other regularity constants. In the general convex case, we also prove an O(1/k) convergence rate for the ergodic sequence with respect to the expected primal-dual gap function. Our assumption for linear convergence is metric subregularity, which is satisfied for strongly convex-strongly concave problems as well as for many nonsmooth and/or nonstrongly convex problems, such as linear programs, Lasso, and support vector machines. We also provide numerical evidence that SPDHG with standard step sizes is competitive in practice against its specialized strongly convex variant SPDHG-mu and other state-of-the-art algorithms, including variance reduction methods.
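
To make the algorithm behind the abstract concrete, the following is a minimal, illustrative Python sketch of SPDHG applied to the Lasso, one of the example problems mentioned above. The block splitting, the step-size rule (sigma_i = gamma/||A_i||, tau = gamma/(n * max_i ||A_i||)), and all function and parameter names here are assumptions chosen for illustration, not the exact configuration analyzed in the paper.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def spdhg_lasso(A, b, lam, n_blocks=10, n_iters=10000, gamma=0.99, seed=0):
    # Illustrative SPDHG sketch (an assumption, not the paper's code) for
    #     min_x 0.5 * ||A x - b||^2 + lam * ||x||_1,
    # with the rows of A split into n_blocks dual blocks, one block updated
    # per iteration under uniform sampling. Step sizes follow the commonly
    # used rule sigma_i = gamma/||A_i||, tau = gamma/(n * max_i ||A_i||);
    # see the paper for the exact step-size conditions used in the analysis.
    rng = np.random.default_rng(seed)
    m, d = A.shape
    blocks = np.array_split(np.arange(m), n_blocks)
    norms = np.array([np.linalg.norm(A[idx], 2) for idx in blocks])

    sigma = gamma / norms                      # per-block dual step sizes
    tau = gamma / (n_blocks * norms.max())     # primal step size

    x = np.zeros(d)
    y = np.zeros(m)                            # dual variable, one entry per row of A
    z = A.T @ y                                # z = A^T y, maintained incrementally
    zbar = z.copy()                            # extrapolated dual image

    for _ in range(n_iters):
        # Primal update using the extrapolated dual image zbar.
        x = soft_threshold(x - tau * zbar, tau * lam)

        # Sample one dual block uniformly and update only that block.
        i = rng.integers(n_blocks)
        idx = blocks[i]
        v = y[idx] + sigma[i] * (A[idx] @ x)
        # prox of sigma_i * f_i^* for f_i(z) = 0.5 * ||z - b_i||^2.
        y_new = (v - sigma[i] * b[idx]) / (1.0 + sigma[i])

        delta = A[idx].T @ (y_new - y[idx])
        y[idx] = y_new
        z += delta
        zbar = z + n_blocks * delta            # theta = 1, 1/p_i = n_blocks

    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    x_true = np.zeros(50)
    x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    x_hat = spdhg_lasso(A, b, lam=1.0)
    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))

Note that only the vector z = A^T y is kept up to date between iterations, so each step touches a single row block of A; this per-iteration cost, rather than a full pass over A, is what makes the stochastic variant attractive in practice.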