Abstract

In this work, we study distributed optimization over a network of learners, where each learner possesses a convex cost function, a set of affine equality constraints, and a set of convex inequality constraints. We propose a distributed diffusion algorithm based on penalty methods that allows the network to cooperatively optimize a global cost function subject to all constraints, without resorting to projection steps. We show that, for sufficiently small step-sizes, the expected distance between the optimal solution vector and the estimate obtained at each node in the network can be made arbitrarily small.
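To make the setting concrete, here is a minimal sketch of the kind of update such a penalty-based diffusion scheme can employ; the notation below is assumed for illustration and is not taken verbatim from the paper. Let J_k(w) denote the convex cost at learner k, and let p_k(w) be a convex penalty function that is zero when the constraints of learner k are satisfied and positive otherwise. An adapt-then-combine diffusion recursion with penalties then takes the form

    \psi_{k,i} = w_{k,i-1} - \mu \, \nabla_w \big[ J_k(w_{k,i-1}) + \eta \, p_k(w_{k,i-1}) \big]
    w_{k,i} = \sum_{l \in \mathcal{N}_k} a_{lk} \, \psi_{l,i}

where \mu is the step-size, \eta > 0 weights the penalty, \mathcal{N}_k is the neighborhood of node k, and the combination weights a_{lk} are nonnegative and sum to one over each neighborhood. Because the constraints enter the recursion only through the gradient of the penalty term, no projection onto the constraint sets is required, and the step-size \mu is the quantity that the abstract's arbitrarily-small-error guarantee is stated in terms of.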
