Authors: Sayed, Ali H.; Tu, Sheng-Yuan; Chen, Jianshu
Deposited: 2017-12-19
Year: 2013
DOI: 10.1109/ITA.2013.6502975
Handle: https://infoscience.epfl.ch/handle/20.500.14299/143314
Title: Online learning and adaptation over networks: More information is not necessarily better
Type: conference paper

Abstract: We examine the performance of stochastic-gradient learners over connected networks for global optimization problems involving risk functions that are not necessarily quadratic. We consider two well-studied classes of distributed schemes: consensus strategies and diffusion strategies. We quantify how the mean-square error and the convergence rate of the network vary with the combination policy and with the fraction of informed agents. Several combination policies are considered, including doubly stochastic rules, the averaging rule, the Metropolis rule, and the Hastings rule. It will be seen that the performance of the network does not necessarily improve with a larger proportion of informed agents. A strategy to counter this degradation in performance is presented.
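
To make the abstract's terminology concrete, the following is a minimal sketch (not the paper's implementation) of a diffusion strategy of the adapt-then-combine type, using the Metropolis combination rule mentioned in the abstract. The ring topology, step size, noise level, and least-mean-squares cost are all illustrative assumptions; the paper considers general (not necessarily quadratic) risk functions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed topology: an undirected ring of N agents (the paper does not fix one).
N = 5
neighbors = {k: {k, (k - 1) % N, (k + 1) % N} for k in range(N)}
deg = {k: len(neighbors[k]) for k in range(N)}

# Metropolis combination weights: a doubly stochastic rule, one of the
# policies named in the abstract. Column k holds the weights agent k uses.
A = np.zeros((N, N))
for k in range(N):
    for l in neighbors[k]:
        if l != k:
            A[l, k] = 1.0 / max(deg[k], deg[l])
    A[k, k] = 1.0 - A[:, k].sum()

# Illustrative global task: all agents estimate a common vector w_star from
# noisy streaming linear measurements (a quadratic risk, for simplicity).
M = 3
w_star = rng.standard_normal(M)
w = np.zeros((N, M))   # per-agent estimates
mu = 0.01              # step size

for _ in range(5000):
    # Adapt: each agent takes a stochastic-gradient (LMS) step on its own data.
    psi = np.empty_like(w)
    for k in range(N):
        u = rng.standard_normal(M)
        d = u @ w_star + 0.1 * rng.standard_normal()
        psi[k] = w[k] + mu * (d - u @ w[k]) * u
    # Combine: each agent averages its neighbors' intermediate estimates
    # using the Metropolis weights.
    w = np.array([sum(A[l, k] * psi[l] for l in neighbors[k]) for k in range(N)])

# Network mean-square deviation: the kind of mean-square-error quantity
# whose dependence on the combination policy the paper analyzes.
msd = np.mean(np.sum((w - w_star) ** 2, axis=1))
```

A consensus strategy differs in that the gradient step and the neighborhood averaging are applied to the same iterate in one update, rather than in the two-stage adapt-then-combine order sketched here.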