Authors: Chakrabarti, Kushal; Gupta, Nirupam; Chopra, Nikhil
Dates: 2022-06-06 (record); 2022-03-01 (issued)
DOI: 10.1016/j.automatica.2021.110095
Handle: https://infoscience.epfl.ch/handle/20.500.14299/188310
Web of Science: WOS:000794995800007
Title: Iterative pre-conditioning for expediting the distributed gradient-descent method: The case of linear least-squares problem
Type: text::journal::journal article::research article
Subjects: Automation & Control Systems; Engineering, Electrical & Electronic; Engineering
Keywords: optimization algorithms

Abstract:
This paper considers the multi-agent linear least-squares problem in a server-agent network architecture. The system comprises multiple agents, each with a set of local data points. The agents are connected to a server, and there is no inter-agent communication. The agents' goal is to compute a linear model that optimally fits the collective data; the agents, however, cannot share their data points. In principle, the agents can solve this problem by collaborating with the server using the server-agent network variant of the classical gradient-descent method. However, when the data points are ill-conditioned, the gradient-descent method requires a large number of iterations to converge. We propose an iterative pre-conditioning technique to mitigate the deleterious impact of the data points' conditioning on the convergence rate of the gradient-descent method. Unlike conventional pre-conditioning techniques, the pre-conditioner matrix used in our proposed technique evolves iteratively. We show that our proposed algorithm converges linearly, with an improved rate of convergence in comparison to both the classical and the accelerated gradient-descent methods. For the special case when the solution of the least-squares problem is unique, our algorithm converges to the solution superlinearly. Through numerical experiments on benchmark least-squares problems, we validate our theoretical findings and demonstrate our algorithm's improved robustness against process noise. (C) 2021 Elsevier Ltd. All rights reserved.
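
The iteratively evolving pre-conditioner described in the abstract can be illustrated with a small centralized sketch (a single "agent" holding all the data, so the server-agent communication structure is omitted). The specific update rule, step sizes, and the regularization parameter `beta` below are illustrative assumptions, not necessarily the authors' exact algorithm: the pre-conditioner `K` is refined by a fixed-point iteration toward the inverse of the regularized Hessian, and each gradient step is multiplied by the current `K`.

```python
import numpy as np

def ipg_least_squares(A, b, iters=200, beta=1e-3):
    """Sketch of iteratively pre-conditioned gradient descent for
    minimizing ||A x - b||^2.  Assumed (illustrative) scheme:
      K <- K - alpha * (H K - I), whose fixed point is H^{-1},
      x <- x - K * grad f(x),
    with H = A^T A + beta*I the regularized Hessian."""
    n = A.shape[1]
    H = A.T @ A + beta * np.eye(n)
    alpha = 1.0 / np.linalg.norm(H, 2)   # stable step for the K-update
    x = np.zeros(n)
    K = np.zeros((n, n))                 # pre-conditioner, refined each iteration
    for _ in range(iters):
        # pre-conditioner update: one fixed-point step toward H^{-1}
        K = K - alpha * (H @ K - np.eye(n))
        # pre-conditioned gradient step on the least-squares objective
        g = A.T @ (A @ x - b)
        x = x - K @ g
    return x
```

As `K` approaches the inverse regularized Hessian, each step approaches a Newton step, which is one intuition for why an evolving pre-conditioner can outperform plain (and even accelerated) gradient descent on ill-conditioned data.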