Exact Diffusion for Distributed Optimization and Learning-Part I: Algorithm Development
This paper develops a distributed optimization strategy with guaranteed exact convergence for a broad class of left-stochastic combination policies. The resulting exact diffusion strategy is shown in Part II of this work to have a wider stability range and superior convergence performance compared to the EXTRA strategy. The exact diffusion method is applicable to locally balanced left-stochastic combination matrices, which, compared to conventional doubly stochastic matrices, are more general and endow the algorithm with faster convergence rates, more flexible step-size choices, and improved privacy-preserving properties. The derivation of the exact diffusion strategy relies on reformulating the aggregate optimization problem as a penalized problem and on a diagonally weighted incremental construction. Detailed stability and convergence analyses are pursued in Part II and are facilitated by examining the evolution of the error dynamics in a transformed domain. Numerical simulations illustrate the theoretical conclusions.
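The exact diffusion strategy follows an adapt-correct-combine recursion at each agent. A minimal sketch is given below on a hypothetical decentralized least-squares problem; the network topology, data, combination weights, and step size are illustrative assumptions, and a symmetric doubly stochastic matrix is used for simplicity even though the strategy admits the more general locally balanced left-stochastic matrices described above.

```python
import numpy as np

# Hypothetical setup: agent k holds the local cost J_k(w) = 0.5*||X_k w - y_k||^2.
rng = np.random.default_rng(0)
N, d = 5, 3                          # number of agents, parameter dimension
X = [rng.standard_normal((10, d)) for _ in range(N)]
y = [rng.standard_normal(10) for _ in range(N)]

def grad(k, w):
    """Gradient of the local cost J_k at w."""
    return X[k].T @ (X[k] @ w - y[k])

# Illustrative doubly stochastic combination matrix on a ring network.
A = np.zeros((N, N))
for k in range(N):
    A[k, (k - 1) % N] = A[k, (k + 1) % N] = 1 / 3
    A[k, k] = 1 / 3
A_bar = (np.eye(N) + A) / 2          # averaged matrix used in the combine step

mu = 0.02                            # step size (assumed)
w = np.zeros((N, d))                 # current iterates, one row per agent
psi_prev = w.copy()                  # adapt-step outputs from the previous iteration

for i in range(3000):
    psi = np.array([w[k] - mu * grad(k, w[k]) for k in range(N)])  # adapt
    phi = psi + w - psi_prev                                       # correct
    w = A_bar @ phi                                                # combine
    psi_prev = psi

# With exact convergence, every agent recovers the minimizer of the
# aggregate cost, i.e., the global least-squares solution.
w_star = np.linalg.lstsq(np.vstack(X), np.concatenate(y), rcond=None)[0]
print(np.max(np.abs(w - w_star)))    # small residual after enough iterations
```

The correction step `psi + w - psi_prev` is what distinguishes exact diffusion from standard diffusion: it removes the steady-state bias, so the iterates converge to the exact minimizer rather than to a small neighborhood of it.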
WOS:000454285700012
arXiv:1702.05122
Published: 2019-02-01
Volume 67, Issue 3, Pages 708-723
REVIEWED