Excess-Risk of Distributed Stochastic Learners

This paper studies the learning ability of consensus and diffusion distributed learners from continuous streams of data arising from different but related statistical distributions. Four distinctive features of diffusion learners, relative to other decentralized schemes, are revealed, even under left-stochastic combination policies. First, closed-form expressions for the evolution of their excess-risk are derived for strongly convex risk functions under a diminishing step-size rule. Second, using these results, it is shown that the diffusion strategy improves the asymptotic convergence rate of the excess-risk relative to non-cooperative schemes. Third, it is shown that when the in-network cooperation rules are designed optimally, the diffusion implementation can outperform naive centralized processing. Finally, the arguments show that diffusion outperforms consensus strategies asymptotically, and that the asymptotic excess-risk expression is invariant to the particular network topology. The framework adopted in this paper studies convergence in the stronger mean-square-error sense, rather than in distribution, and develops tools that enable a close examination of the differences between distributed strategies in terms of asymptotic behavior as well as convergence rates.
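To make the setting concrete, the following is a minimal sketch of the adapt-then-combine (ATC) form of the diffusion strategy referenced in the abstract, specialized to a least-mean-squares risk with a diminishing step-size rule. All names (`atc_diffusion`, the `data` callable, `mu0`) are illustrative assumptions, not the paper's notation, and the quadratic loss is only one example of a strongly convex risk.

```python
import numpy as np

def atc_diffusion(A, data, w0, mu0=0.1, iters=500):
    """Adapt-then-combine (ATC) diffusion over N agents (illustrative sketch).

    A    : N x N left-stochastic combination matrix (columns sum to one);
           A[l, k] is the weight agent k assigns to neighbor l.
    data : callable(i) -> list of (u, d) regressor/observation pairs,
           one per agent, streamed from each agent's local distribution.
    w0   : initial M-dimensional estimate shared by all agents.
    mu0  : base step-size; the rule mu(i) = mu0 / i is diminishing.
    """
    N = A.shape[0]
    W = np.tile(w0, (N, 1))              # current estimates, one row per agent
    for i in range(1, iters + 1):
        mu = mu0 / i                     # diminishing step-size rule
        # Adaptation step: each agent takes a stochastic-gradient step
        # on its own streaming sample (instantaneous LMS gradient here).
        psi = np.empty_like(W)
        for k, (u, d) in enumerate(data(i)):
            grad = -(d - u @ W[k]) * u
            psi[k] = W[k] - mu * grad
        # Combination step: each agent convex-combines its neighbors'
        # intermediate estimates using the left-stochastic weights.
        W = A.T @ psi
    return W
```

With a uniform averaging matrix, each combination step pools all agents' intermediate estimates, which is the cooperative mechanism whose asymptotic excess-risk benefit the paper quantifies.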

Published in:
IEEE Transactions on Information Theory, vol. 62, no. 10, pp. 5753-5785

 Record created 2017-12-19, last modified 2018-12-03
