Authors: Towfic, Zaid J.; Chen, Jianshu; Sayed, Ali H.
Deposited: 2017-12-19
Published: 2012
DOI: 10.1109/MLSP.2012.6349778
Handle: https://infoscience.epfl.ch/handle/20.500.14299/143311
Title: On the generalization ability of distributed online learners
Type: conference paper

Abstract: We propose a fully distributed stochastic-gradient strategy based on diffusion adaptation techniques. We show that, for strongly convex risk functions, the excess risk at every node decays at the rate of O(1/(Ni)), where N is the number of learners and i is the iteration index. In this way, the distributed diffusion strategy, which relies only on local interactions, achieves the same convergence rate as centralized strategies that have access to all data from all nodes at every iteration. We also show that every learner improves its excess risk in comparison to the non-cooperative mode of operation, in which each learner operates independently of the others.
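The diffusion strategy described in the abstract can be illustrated with a minimal sketch of the adapt-then-combine (ATC) form: each node takes a local stochastic-gradient step, then convexly mixes its intermediate estimate with those of its neighbors. The ring topology, combination weights, step size, and least-squares data model below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Illustrative ATC diffusion sketch (assumed setup, not the paper's exact model):
# N learners on a ring, each observing noisy streaming data y = x^T w* + v,
# minimizing the quadratic risk J_k(w) = E[(y - x^T w)^2] / 2.
rng = np.random.default_rng(0)
N, d, iters, mu = 10, 5, 2000, 0.05
w_star = rng.normal(size=d)            # common minimizer of all local risks

# Doubly stochastic combination matrix for a ring network (each row sums to 1).
A = np.zeros((N, N))
for k in range(N):
    A[k, k] = 0.5
    A[k, (k - 1) % N] = 0.25
    A[k, (k + 1) % N] = 0.25

W = np.zeros((N, d))                   # one estimate per learner
for i in range(iters):
    psi = np.empty_like(W)
    for k in range(N):
        # Adaptation step: instantaneous gradient of the local quadratic risk.
        x = rng.normal(size=d)                 # streaming regressor
        y = x @ w_star + 0.1 * rng.normal()    # noisy target
        grad = (W[k] @ x - y) * x
        psi[k] = W[k] - mu * grad
    # Combination step: each node averages intermediate estimates of neighbors.
    W = A @ psi

# Mean-squared deviation across the network after adaptation.
msd = np.mean(np.sum((W - w_star) ** 2, axis=1))
```

The combination step is what lets local interactions emulate centralized averaging: noise in the individual gradient updates is smoothed across neighbors, which is the mechanism behind the N-fold improvement in the excess-risk rate claimed in the abstract.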