Title: Distributed inference over regression and classification models
Authors: Towfic, Zaid J.; Chen, Jianshu; Sayed, Ali H.
Year: 2013
Type: Conference paper (conference proceedings)
DOI: 10.1109/ICASSP.2013.6638696
Record: https://infoscience.epfl.ch/handle/20.500.14299/143339 (deposited 2017-12-19)

Abstract: We study the distributed inference task over regression and classification models where the likelihood function is strongly log-concave. We show that diffusion strategies allow the Kullback-Leibler (KL) divergence between two likelihood functions to converge to zero at the rate 1/(Ni) on average and with high probability, where N is the number of nodes in the network and i is the number of iterations. We derive asymptotic expressions for the expected regularized KL divergence and show that the diffusion strategy can outperform both non-cooperative and conventional centralized strategies, since diffusion implementations can weight a node's contribution in proportion to its noise level.
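
The abstract refers to diffusion strategies in general terms. As a minimal sketch of what such a strategy looks like, the code below implements a standard adapt-then-combine (ATC) diffusion step for distributed least-squares regression over N cooperating nodes. The ring topology, uniform step size, and uniform combination weights are illustrative assumptions, not the construction analyzed in the paper (which, per the abstract, weights each node's contribution in proportion to its noise level).

# Minimal sketch of an adapt-then-combine (ATC) diffusion strategy for
# distributed least-squares regression. Topology, step size, and the
# combination weights are illustrative assumptions, not the paper's
# exact construction.
import numpy as np

rng = np.random.default_rng(0)
N, M, iters = 10, 5, 2000           # nodes, model dimension, iterations
w_true = rng.standard_normal(M)     # common model all nodes estimate
sigma = rng.uniform(0.1, 1.0, N)    # per-node observation noise std (assumed)
mu = 0.01                           # step size (assumed uniform across nodes)

# Left-stochastic combination matrix over a ring topology (assumption):
# each node averages its own estimate with those of its two ring neighbors.
A = np.zeros((N, N))
for k in range(N):
    for l in (k - 1, k, k + 1):
        A[l % N, k] = 1.0 / 3.0

w = np.zeros((N, M))                # current estimates, one row per node
for i in range(iters):
    psi = np.empty_like(w)
    for k in range(N):              # adapt: local stochastic-gradient step
        u = rng.standard_normal(M)              # regressor at node k
        d = u @ w_true + sigma[k] * rng.standard_normal()  # noisy measurement
        psi[k] = w[k] + mu * u * (d - u @ w[k])
    for k in range(N):              # combine: weighted neighborhood average
        w[k] = A[:, k] @ psi

print("mean squared deviation per node:",
      np.mean((w - w_true) ** 2, axis=1))

In this sketch the cooperation benefit comes from the combine step averaging out per-node gradient noise; replacing the uniform weights in A with weights that decrease with a node's noise variance is the kind of refinement the abstract alludes to when comparing diffusion against non-cooperative and centralized strategies.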