A coordinator-driven communication reduction scheme for distributed optimization using the projected gradient method
We propose a way to estimate the value function of a convex proximal minimization problem. The scheme constructs a convex set within which the optimizer resides and iteratively refines this set each time the value function is sampled, i.e., each time the proximal minimization problem is solved exactly. The motivation stems from multi-agent distributed optimization problems in which each agent is described by a proximal minimization problem unknown to the global coordinator. We prove convergence results for the solution of such distributed optimization problems in the special case where the projected gradient method is used, and demonstrate that the developed scheme significantly reduces communication requirements when applied to a microgrid setting.
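To fix ideas, the projected gradient iteration referenced in the abstract can be sketched as follows. This is a minimal illustration, not the paper's coordinator scheme: the quadratic objective, the box constraint, and the step size are all assumptions chosen for the example.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box {x : lo <= x <= hi}."""
    return np.clip(x, lo, hi)

def projected_gradient(grad, project, x0, step=0.1, iters=200):
    """Projected gradient method: x_{k+1} = P(x_k - step * grad(x_k))."""
    x = x0
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x

# Illustrative problem: minimize ||x - c||^2 over the box [0, 1]^2
# with c = (1.5, -0.5); the minimizer is the projection of c onto
# the box, namely (1.0, 0.0).
c = np.array([1.5, -0.5])
grad = lambda x: 2.0 * (x - c)
x_star = projected_gradient(grad,
                            lambda x: project_box(x, 0.0, 1.0),
                            x0=np.zeros(2))
```

In the distributed setting the paper considers, each projection or gradient evaluation would require communication with the agents; the proposed envelope on the value function lets the coordinator skip many of these exact evaluations.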