Abstract

The purpose of this article is to develop and study a decentralized strategy for Pareto optimization of an aggregate cost consisting of regularized risks. Each risk is modeled as the expectation of some loss function with unknown probability distribution, while the regularizers are assumed deterministic but are not required to be differentiable or even continuous. The individual regularized cost functions are distributed across a strongly connected network of agents, and the Pareto optimal solution is sought by appealing to a multiagent diffusion strategy. To this end, the regularizers are smoothed by means of infimal convolution, and it is shown that the Pareto solution of the approximate smooth problem can be made arbitrarily close to the solution of the original nonsmooth problem. Performance bounds are established under conditions that are weaker than those previously assumed in the literature and, hence, applicable to a broader class of adaptation and learning problems.
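As a hedged illustration of the smoothing step mentioned above (not the paper's specific construction), the infimal convolution of a nonsmooth regularizer with a scaled quadratic yields its Moreau envelope, which is differentiable and uniformly close to the original. The sketch below, with hypothetical function names, checks this for the ℓ1 regularizer, whose envelope has the closed-form Huber shape: the smoothed value never exceeds |x| and deviates from it by at most δ/2, so the approximation tightens as the smoothing parameter δ shrinks.

```python
import numpy as np

def moreau_envelope_l1(x, delta):
    # Infimal convolution of |.| with (1/(2*delta))*(.)^2, i.e. the
    # Moreau envelope of the l1 regularizer; closed form is Huber.
    ax = np.abs(x)
    return np.where(ax <= delta, ax**2 / (2 * delta), ax - delta / 2)

def numeric_envelope(x, delta, grid):
    # Brute-force inf over z of |z| + (x - z)^2 / (2*delta) on a grid,
    # to confirm the closed form really is the infimal convolution.
    return np.min(np.abs(grid) + (x - grid) ** 2 / (2 * delta))

xs = np.linspace(-2.0, 2.0, 9)
grid = np.linspace(-3.0, 3.0, 20001)
for delta in (0.5, 0.1):
    smooth = moreau_envelope_l1(xs, delta)
    brute = np.array([numeric_envelope(x, delta, grid) for x in xs])
    assert np.allclose(smooth, brute, atol=1e-3)
    # Uniform closeness: 0 <= |x| - env(x) <= delta/2 everywhere.
    assert np.all(smooth <= np.abs(xs) + 1e-12)
    assert np.all(np.abs(xs) - smooth <= delta / 2 + 1e-12)
print("inf-conv smoothing of |x| verified for delta in (0.5, 0.1)")
```

The uniform δ/2 bound is what makes it plausible that the Pareto solution of the smoothed problem can be driven arbitrarily close to that of the original nonsmooth problem by letting δ → 0.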
