Title: Variance-Reduced Stochastic Learning by Networked Agents Under Random Reshuffling
Authors: Yuan, Kun; Ying, Bicheng; Liu, Jiageng; Sayed, Ali H.
Dates: 2019-01-23; 2019-01-15
DOI: 10.1109/TSP.2018.2872003
URL: https://infoscience.epfl.ch/handle/20.500.14299/153979
Web of Science ID: WOS:000452618000006
Type: text::journal::journal article::research article
Subjects: Engineering, Electrical & Electronic; Engineering
Keywords: diffusion strategy; variance reduction; stochastic gradient descent; memory efficiency; AVRG; mini-batch; convergence; algorithm; ADMM

Abstract: This paper develops a distributed variance-reduced strategy for a collection of interacting agents connected by a graph topology. The resulting diffusion-AVRG algorithm (where AVRG stands for "amortized variance-reduced gradient") is shown to converge linearly to the exact solution and to be more memory efficient than alternative algorithms. When a batch implementation is employed, simulations show that diffusion-AVRG is more computationally efficient than exact diffusion or EXTRA, while maintaining almost the same communication efficiency.
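
To make the abstract's terms concrete, the following is a minimal single-agent sketch of an AVRG-style recursion on a least-squares problem. It illustrates the two ideas the abstract names: random reshuffling (sampling without replacement each epoch) and amortizing the full-gradient computation across the previous epoch, so only one snapshot vector is stored rather than N per-sample gradients (the memory saving cited). This is an illustrative sketch based on the general AVRG idea, not the paper's exact diffusion-AVRG recursion, which additionally combines iterates with graph neighbors; the function name, step size, and problem setup are all assumptions.

```python
import numpy as np

def avrg_sketch(X, y, mu=0.05, epochs=200, seed=0):
    """Hypothetical single-agent AVRG-style solver for least-squares
    min_w (1/2N) * sum_i (x_i @ w - y_i)**2 (an assumed example problem)."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    w = np.zeros(d)
    g = np.zeros(d)  # amortized average gradient, accumulated over the previous epoch

    def grad(w, i):
        # Per-sample gradient of the least-squares loss at index i.
        return X[i] * (X[i] @ w - y[i])

    for _ in range(epochs):
        w0 = w.copy()          # epoch-start snapshot: one vector, not N stored gradients
        g_next = np.zeros(d)
        for i in rng.permutation(N):   # random reshuffling: pass over data without replacement
            gi = grad(w, i)
            # Variance-reduced gradient estimate: per-sample gradient,
            # corrected by the snapshot gradient and the amortized average.
            w = w - mu * (gi - grad(w0, i) + g)
            g_next += gi / N   # amortize next epoch's average gradient as we go
        g = g_next
    return w
```

A quick usage check: on a consistent system `y = X @ w_true`, every per-sample gradient vanishes at `w_true`, so the iterates should approach the exact solution rather than hovering in a noise ball, consistent with the linear convergence the abstract claims for the variance-reduced scheme.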