Abstract

We consider the problem of distributed estimation, in which a set of nodes collectively estimates a parameter of interest. We motivate and propose new variants of the diffusion LMS algorithm, including one that outperforms previous solutions without increasing computational complexity or communication cost, and others that achieve even better performance by allowing additional communication. We analyze the performance of these algorithms and compare the theoretical predictions with simulation results.
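To make the setting concrete, the following is a minimal sketch of a standard adapt-then-combine (ATC) diffusion LMS recursion of the kind the abstract refers to. The network size, ring topology, combination weights, step size, and noise level are all illustrative assumptions, not taken from the paper: each node runs one LMS step on its own data, then averages the intermediate estimates of its neighbors.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, T = 10, 4, 2000           # nodes, parameter dimension, iterations (assumed)
w0 = rng.standard_normal(M)     # unknown parameter to be estimated (illustrative)

# Combination matrix for a ring topology: each node weights itself and its
# two neighbors equally (a hypothetical, doubly stochastic choice).
A = np.zeros((N, N))
for k in range(N):
    for l in (k - 1, k, (k + 1) % N):
        A[l % N, k] = 1.0 / 3.0

mu = 0.02                       # LMS step size (assumed)
W = np.zeros((N, M))            # per-node estimates of w0

for _ in range(T):
    # Adapt: each node k takes one LMS step using its local data (u, d).
    psi = np.empty_like(W)
    for k in range(N):
        u = rng.standard_normal(M)              # local regressor
        d = u @ w0 + 0.1 * rng.standard_normal()  # noisy local measurement
        psi[k] = W[k] + mu * u * (d - u @ W[k])
    # Combine: node k forms W[k] = sum_l A[l, k] * psi[l] over its neighbors.
    W = A.T @ psi

err = np.mean(np.linalg.norm(W - w0, axis=1))   # mean estimation error
```

The two-phase structure (local adaptation followed by neighborhood combination) is what distinguishes diffusion strategies from purely local LMS; the extra-communication variants mentioned in the abstract would exchange more than just the intermediate estimates `psi`.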
