Record ID: 233525
Last modified: 2018-10-07 23:14:45
ISSN: 1941-0476
DOI: 10.1109/TSP.2017.2757903
Web of Science (ISI) ID: 000418854700006
Document type: Article (journal article)
Title: Coordinate-Descent Diffusion Learning by Networked Agents
Publisher: IEEE - Institute of Electrical and Electronics Engineers Inc., Piscataway
Year: 2018
Page count: 16
Abstract: This paper examines the mean-square error performance of diffusion stochastic algorithms under a generalized coordinate-descent scheme. In this setting, the adaptation step by each agent is limited to a random subset of the coordinates of its stochastic gradient vector. The selection of coordinates varies randomly from iteration to iteration and from agent to agent across the network. Such schemes are useful in reducing computational complexity at each iteration in power-intensive large data applications. They are also useful in modeling situations where some partial gradient information may be missing at random. Interestingly, the results show that the steady-state performance of the learning strategy is not always degraded, while the convergence rate suffers some degradation. The results provide yet another indication of the resilience and robustness of adaptive distributed strategies.
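The scheme described in the abstract can be sketched in a toy simulation. This is not the paper's exact algorithm or analysis; it is a minimal adapt-then-combine diffusion LMS example in which each agent updates only a random subset of gradient coordinates per iteration. All names and parameter values (the ring combination matrix A, step size mu, coordinate-selection probability p, noise level) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper): N agents estimate a common
# M-dimensional model w_star from noisy linear measurements d = u @ w + v.
N, M = 10, 5
w_star = rng.standard_normal(M)

# Doubly stochastic combination matrix A: uniform weights on a ring network.
A = np.zeros((N, N))
for k in range(N):
    A[k, (k - 1) % N] = 1 / 3
    A[k, (k + 1) % N] = 1 / 3
    A[k, k] = 1 / 3

mu = 0.01   # step size (assumed)
p = 0.5     # probability each coordinate is selected at each iteration
W = np.zeros((N, M))   # current estimates, one row per agent

for _ in range(2000):
    # Adapt: each agent takes a stochastic-gradient step restricted to a
    # random coordinate subset (the coordinate-descent / partial-update step).
    psi = W.copy()
    for k in range(N):
        u = rng.standard_normal(M)
        d = u @ w_star + 0.1 * rng.standard_normal()
        grad = -(d - u @ W[k]) * u          # instantaneous LMS gradient
        mask = rng.random(M) < p            # random coordinate selection
        psi[k] = W[k] - (mu / p) * mask * grad   # 1/p scaling keeps the mean step unbiased
    # Combine: each agent averages intermediate estimates over its neighborhood.
    W = A @ psi

print(np.linalg.norm(W.mean(axis=0) - w_star))  # small steady-state error
```

The selection mask is redrawn independently per agent and per iteration, matching the abstract's description that coordinate choices vary across both time and the network; the combine step diffuses information so that skipped coordinates at one agent can still be corrected via its neighbors.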
Keywords: coordinate descent; stochastic partial update; computational complexity; diffusion strategies; stochastic gradient algorithms; strongly-convex cost
Authors: Wang, Chengcheng; Zhang, Yonggang; Ying, Bicheng; Sayed, Ali H.
Internal author IDs: 283344; 251037
Journal: IEEE Transactions on Signal Processing
Volume: 66
Issue: 2
Pages: 352-367
URL: https://arxiv.org/abs/1607.01838
Laboratory: ASL
OAI identifier: oai:infoscience.tind.io:233525
Record type: article
School: STI
Set: GLOBAL_SET
EPFL identifier: EPFL-ARTICLE-233525
Citation key: wang2016coordinate/ASL
Status: PUBLISHED; REVIEWED; ARTICLE
Internal record IDs: 252608; U13470; 144315; OTHER