Title: Regularized Diffusion Adaptation via Conjugate Smoothing
Authors: Vlaski, Stefan; Vandenberghe, Lieven; Sayed, Ali H.
Date issued: 2022-05-01
Date available: 2022-06-06
DOI: 10.1109/TAC.2021.3081073
Handle: https://infoscience.epfl.ch/handle/20.500.14299/188305
Web of Science: WOS:000794194000017
Type: text::journal::journal article::research article

Abstract: The purpose of this article is to develop and study a decentralized strategy for Pareto optimization of an aggregate cost consisting of regularized risks. Each risk is modeled as the expectation of some loss function with unknown probability distribution, while the regularizers are assumed deterministic but are not required to be differentiable or even continuous. The individual regularized cost functions are distributed across a strongly connected network of agents, and the Pareto-optimal solution is sought by appealing to a multiagent diffusion strategy. To this end, the regularizers are smoothed by means of infimal convolution, and it is shown that the Pareto solution of the approximate smooth problem can be made arbitrarily close to the solution of the original nonsmooth problem. Performance bounds are established under conditions that are weaker than previously assumed in the literature and, hence, applicable to a broader class of adaptation and learning problems.

Subjects: Automation & Control Systems; Engineering, Electrical & Electronic; Engineering
Keywords: smoothing methods; aggregates; eigenvalues and eigenfunctions; cost function; Pareto optimization; linear matrix inequalities; electrical engineering; diffusion strategy; distributed optimization; nonsmooth regularizer; proximal diffusion; proximal operator; regularized diffusion; smoothing; least-mean squares; adaptive networks; optimization; consensus; convergence; algorithms
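The smoothing step described in the abstract, infimal convolution of a nonsmooth regularizer with a quadratic, is the Moreau envelope; for the absolute-value regularizer it recovers the Huber function in closed form. A minimal numerical sketch of this fact (the function names, grid, and parameter values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def moreau_envelope(g, x, mu, grid):
    """Numerically evaluate the infimal convolution of g with the
    quadratic v -> (x - v)^2 / (2*mu), i.e. the Moreau envelope of g at x."""
    return np.min(g(grid) + (x - grid) ** 2 / (2 * mu))

def huber(x, mu):
    """Closed-form Moreau envelope of |.|: the Huber function."""
    return x * x / (2 * mu) if abs(x) <= mu else abs(x) - mu / 2

# Compare the numerical envelope of |.| against the Huber formula.
grid = np.linspace(-3.0, 3.0, 60001)   # fine grid over which the inf is taken
mu = 0.5                               # smoothing parameter
for x in (0.0, 0.3, 1.2):
    num = moreau_envelope(np.abs, x, mu, grid)
    print(f"x={x:4.1f}  numeric={num:.4f}  huber={huber(x, mu):.4f}")
```

Shrinking `mu` tightens the approximation, which mirrors the abstract's claim that the smoothed Pareto solution can be made arbitrarily close to the nonsmooth one.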
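Diffusion strategies of the kind referenced in the abstract are commonly written in adapt-then-combine form: each agent takes a stochastic-gradient step on its own risk, then averages the intermediate iterates of its neighbors through a combination matrix. A toy sketch for unregularized least-mean-squares risks on a ring network (the network size, step size, and noise level are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 5, 3                       # number of agents, parameter dimension
w_true = rng.standard_normal(M)   # common model observed by all agents

# Doubly stochastic combination matrix for a ring network (each agent
# averages itself and its two neighbors with equal weights).
A = np.zeros((N, N))
for k in range(N):
    for j in (k - 1, k, k + 1):
        A[k, j % N] = 1 / 3

w = np.zeros((N, M))              # per-agent estimates
step = 0.05
for _ in range(2000):
    # Adapt: stochastic-gradient (LMS) step on each agent's local risk.
    psi = np.empty_like(w)
    for k in range(N):
        u = rng.standard_normal(M)                      # regression vector
        d = u @ w_true + 0.01 * rng.standard_normal()   # noisy measurement
        psi[k] = w[k] + step * u * (d - u @ w[k])
    # Combine: average the neighbors' intermediate iterates.
    w = A @ psi

rel_err = np.linalg.norm(w - w_true) / np.linalg.norm(w_true)
```

The paper's contribution concerns the regularized case, where each adaptation step would act on a smoothed regularizer as well; this sketch only shows the bare diffusion mechanism.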