Adaptive stochastic convex optimization over networks

In this work, we study the task of distributed optimization over a network of learners in which each learner possesses a convex cost function, a set of affine equality constraints, and a set of convex inequality constraints. We propose a distributed diffusion algorithm based on penalty methods that allows the network to cooperatively optimize a global cost function, subject to all constraints and without using projection steps. We show that when sufficiently small step-sizes are employed, the expected distance between the optimal solution vector and that obtained at each node in the network can be made arbitrarily small.
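The penalty-based diffusion strategy summarized above can be sketched in a small simulation. Everything below is an illustrative assumption, not the paper's actual formulation: toy quadratic costs, one shared affine equality constraint and one convex inequality constraint handled through quadratic penalty terms (so no projection steps), an adapt-then-combine update, and arbitrary choices for the penalty weight and the small step-size.

```python
import numpy as np

# Hypothetical toy problem (illustrative only): N agents cooperatively
# minimize sum_k ||A_k w - b_k||^2 subject to a shared equality
# constraint c^T w = d and inequality g^T w <= h, both enforced via
# quadratic penalties rather than projections.
rng = np.random.default_rng(0)
N, M = 5, 3                                # number of agents, dimension
A = [rng.standard_normal((4, M)) for _ in range(N)]
b = [rng.standard_normal(4) for _ in range(N)]
c, d = np.ones(M), 1.0                     # equality constraint c^T w = d
g, h = np.array([1.0, -1.0, 0.0]), 2.0     # inequality g^T w <= h

eta = 50.0                                 # penalty parameter (assumed)
mu = 5e-4                                  # small step-size (assumed)

# Doubly stochastic combination matrix for a ring topology.
C = np.zeros((N, N))
for k in range(N):
    for j in (k - 1, k, k + 1):
        C[k, j % N] = 1.0 / 3.0

def penalized_grad(k, w):
    """Gradient of agent k's cost plus the quadratic penalty terms."""
    grad = 2.0 * A[k].T @ (A[k] @ w - b[k])
    grad += 2.0 * eta * (c @ w - d) * c                # equality penalty
    grad += 2.0 * eta * max(g @ w - h, 0.0) * g        # one-sided penalty
    return grad

W = np.zeros((N, M))                       # one estimate per agent
for _ in range(20000):
    # Adapt-then-combine diffusion: local gradient step on the
    # penalized cost, followed by averaging with neighbors.
    psi = np.array([W[k] - mu * penalized_grad(k, W[k]) for k in range(N)])
    W = C @ psi

w_bar = W.mean(axis=0)
disagreement = np.max(np.abs(W - w_bar))   # how far agents are apart
eq_residual = abs(c @ w_bar - d)           # equality-constraint violation
```

Consistent with the abstract's claim, a smaller `mu` shrinks the disagreement between agents, while a larger `eta` tightens the (approximate) constraint satisfaction achieved without any projection.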


Published in:
Proceedings of the 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton), 1272-1277
Presented at:
51st Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, October 2-4, 2013
Year:
2013
Publisher:
IEEE

