Authors: Ying, Bicheng; Sayed, Ali H.
Date available: 2017-12-19
Publication year: 2016
DOI: 10.1109/ICASSP.2016.7472610
URI: https://infoscience.epfl.ch/handle/20.500.14299/143418

Title: Performance limits of single-agent and multi-agent sub-gradient stochastic learning

Abstract: This work examines the performance of stochastic sub-gradient learning strategies, for both the stand-alone and networked-agent cases, under weaker conditions than usually considered in the literature. It is shown that these conditions are automatically satisfied by several important cases of interest, including support-vector machines and sparsity-inducing learning solutions. The analysis establishes that sub-gradient strategies can attain exponential convergence rates, as opposed to sub-linear rates, and that they can approach the optimal solution to within O(p) for sufficiently small step-sizes p. A realizable exponential-weighting procedure is proposed to smooth the intermediate iterates and to guarantee these desirable performance properties.

Type: text::conference output::conference proceedings::conference paper
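Note: the following is a minimal illustrative sketch, not the authors' algorithm or analysis. It only shows the general flavor of a stochastic sub-gradient update with a constant step-size applied to a regularized hinge loss (the SVM case mentioned in the abstract), combined with an exponential weighting of the intermediate iterates. The function name, parameters (mu, rho, beta), and data are assumptions chosen for the example.

```python
import numpy as np

def stochastic_subgradient_svm(X, y, mu=0.01, rho=1e-3, beta=0.99, epochs=10, seed=0):
    """Illustrative sketch: stochastic sub-gradient method for a regularized
    hinge loss (soft-margin SVM) with an exponentially weighted average of the
    iterates. Parameter names and values are assumptions, not the paper's."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)        # current iterate
    w_bar = np.zeros(d)    # exponentially weighted (smoothed) iterate
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w)
            # Sub-gradient of (rho/2)*||w||^2 + max(0, 1 - y_i * x_i' w)
            # evaluated at the sampled point (x_i, y_i).
            g = rho * w - (y[i] * X[i] if margin < 1.0 else 0.0)
            w = w - mu * g                          # constant step-size update
            w_bar = beta * w_bar + (1 - beta) * w   # exponential-weighting smoothing
    return w_bar

# Toy usage on synthetic linearly separable data.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
y = np.sign(X @ np.array([1.0, -2.0]))
w_hat = stochastic_subgradient_svm(X, y)
print("learned weights:", w_hat)
```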