Title: Linear convergence of primal-dual gradient methods and their performance in distributed optimization
Authors: Alghunaim, Sulaiman A.; Sayed, Ali H.
Dates: 2020-06-04 (record); 2020-07-01 (issued)
DOI: 10.1016/j.automatica.2020.109003
Handle: https://infoscience.epfl.ch/handle/20.500.14299/169096
Web of Science: WOS:000534593100045
Type: Research article (journal article)
Subject categories: Automation & Control Systems; Engineering, Electrical & Electronic; Engineering
Keywords: primal-dual methods; linear convergence; Arrow-Hurwicz; augmented Lagrangian; distributed optimization; saddle-point problems; convex optimization; coordination; stability

Abstract: In this work, we revisit a classical incremental implementation of the primal-descent dual-ascent gradient method used for the solution of equality-constrained optimization problems. We provide a short proof that establishes the linear (exponential) convergence of the algorithm for smooth strongly convex cost functions and study its relation to the non-incremental implementation. We also study the effect of the augmented Lagrangian penalty term on the performance of distributed optimization algorithms for the minimization of aggregate cost functions over multi-agent networks.
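The abstract refers to an incremental primal-descent dual-ascent (Arrow-Hurwicz-type) iteration on an augmented Lagrangian for equality-constrained problems. The sketch below is a minimal illustration of that general update form, assuming a quadratic strongly convex cost and arbitrarily chosen step sizes and penalty parameter (mu_x, mu_lam, rho, and the problem data are illustrative assumptions, not values from the paper); it is not the paper's exact algorithm or analysis.

```python
import numpy as np

# Minimal sketch: incremental (Gauss-Seidel-style) primal-descent dual-ascent
# iteration for  min_x f(x)  subject to  A x = b,  using the augmented
# Lagrangian  L_rho(x, lam) = f(x) + lam^T (A x - b) + (rho/2) ||A x - b||^2.
# All problem data, step sizes, and rho below are illustrative choices.

rng = np.random.default_rng(0)
n, m = 10, 3
P = rng.standard_normal((n, n))
P = P @ P.T + n * np.eye(n)          # strongly convex quadratic cost f(x) = 0.5 x^T P x + q^T x
q = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def grad_f(x):
    return P @ x + q

rho, mu_x, mu_lam = 1.0, 0.01, 0.01
x, lam = np.zeros(n), np.zeros(m)

for k in range(5000):
    # primal descent step on the augmented Lagrangian
    x = x - mu_x * (grad_f(x) + A.T @ lam + rho * A.T @ (A @ x - b))
    # incremental dual ascent step: uses the freshly updated primal iterate
    lam = lam + mu_lam * (A @ x - b)

print("constraint residual:", np.linalg.norm(A @ x - b))
```

A non-incremental variant would instead update the dual variable with the previous primal iterate; the paper studies the relation between the two implementations and the role of the rho-penalty term.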