Towfic, Zaid J.; Sayed, Ali H.
"Adaptive stochastic convex optimization over networks"
Conference paper, 2013
DOI: 10.1109/Allerton.2013.6736672
Handle: https://infoscience.epfl.ch/handle/20.500.14299/143373
Record date: 2017-12-19

Abstract: In this work, we study the task of distributed optimization over a network of learners in which each learner possesses a convex cost function, a set of affine equality constraints, and a set of convex inequality constraints. We propose a distributed diffusion algorithm based on penalty methods that allows the network to cooperatively optimize a global cost function, subject to all constraints and without using projection steps. We show that when sufficiently small step-sizes are employed, the expected distance between the optimal solution vector and that obtained at each node in the network can be made arbitrarily small.
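
To illustrate the kind of scheme the abstract describes, the following is a minimal sketch of a penalty-based diffusion (adapt-then-combine) iteration: each agent takes a gradient step on its local cost augmented with penalty terms for its constraints (no projection step), then averages with its neighbors. The quadratic local costs, the specific penalty functions, the ring topology, and all variable names (A, B, b, c, mu, eta, etc.) are illustrative assumptions, not taken from the paper itself.

import numpy as np

rng = np.random.default_rng(0)
N, M = 6, 4                       # number of agents, dimension of w

# Local quadratic costs J_k(w) = 0.5 * ||A_k w - d_k||^2  (assumed for the sketch)
A = [rng.standard_normal((8, M)) for _ in range(N)]
d = [rng.standard_normal(8) for _ in range(N)]

# Per-agent constraints: B_k w = b_k (affine equality), c_k^T w <= 1 (affine inequality)
B = [rng.standard_normal((1, M)) for _ in range(N)]
b = [rng.standard_normal(1) for _ in range(N)]
c = [rng.standard_normal(M) for _ in range(N)]

def penalized_grad(k, w, eta):
    """Gradient of agent k's cost plus eta-weighted constraint penalties at w."""
    grad = A[k].T @ (A[k] @ w - d[k])                # local cost gradient
    grad += eta * 2.0 * B[k].T @ (B[k] @ w - b[k])   # quadratic penalty for equality
    slack = max(0.0, c[k] @ w - 1.0)                 # inequality violation, if any
    grad += eta * 2.0 * slack * c[k]                 # squared-hinge penalty for inequality
    return grad

# Doubly-stochastic combination matrix for a ring network with uniform weights
C = np.zeros((N, N))
for k in range(N):
    for l in (k - 1, k, k + 1):
        C[k, l % N] = 1.0 / 3.0

mu, eta = 1e-3, 10.0              # small step-size, penalty weight
W = np.zeros((N, M))              # one local estimate per agent

for _ in range(20000):
    # Adapt: each agent descends its penalized local cost
    psi = np.array([W[k] - mu * penalized_grad(k, W[k], eta) for k in range(N)])
    # Combine: each agent averages the intermediate estimates of its neighbors
    W = C @ psi

print("disagreement across agents:", np.max(np.abs(W - W.mean(axis=0))))
print("worst equality residual:", max(np.linalg.norm(B[k] @ W[k] - b[k]) for k in range(N)))

In line with the abstract, a smaller step-size mu (and a larger penalty weight eta) drives both the disagreement across agents and the constraint violation toward zero, at the cost of slower convergence; the exact step-size conditions and error bounds are established in the paper itself.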