Title: On Maintaining Linear Convergence of Distributed Learning and Optimization Under Limited Communication
Authors: Magnusson, Sindri; Shokri-Ghadikolaei, Hossein; Li, Na
Dates: 2020-11-29; 2020-01-01
DOI: 10.1109/TSP.2020.3031073
Handle: https://infoscience.epfl.ch/handle/20.500.14299/173690
Web of Science: WOS:000589192000001

Abstract: In distributed optimization and machine learning, multiple nodes coordinate to solve large problems. To do this, the nodes need to compress important algorithm information to bits so that it can be communicated over a digital channel. The communication time of these algorithms follows a complex interplay between a) the algorithm's convergence properties, b) the compression scheme, and c) the transmission rate offered by the digital channel. We explore these relationships for a general class of linearly convergent distributed algorithms. In particular, we illustrate how to design quantizers for these algorithms that compress the communicated information to a few bits while still preserving the linear convergence. Moreover, we characterize the communication time of these algorithms as a function of the available transmission rate. We illustrate our results on learning algorithms using different communication structures, such as decentralized algorithms where a single master coordinates information from many workers and fully distributed algorithms where only neighbours in a communication graph can communicate. We conclude that a co-design of machine learning and communication protocols is essential for machine learning over networks to flourish.

Subjects: Engineering, Electrical & Electronic; Engineering
Keywords: signal processing algorithms; convergence; optimization; distributed algorithms; quantization (signal); machine learning algorithms; program processors; machine learning; quantization; communication; game; algorithms; complexity; framework
Type: text::journal::journal article::research article
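The abstract's central idea, having nodes exchange only a few bits per communicated entry, can be illustrated with a minimal sketch. The snippet below is not the quantizer design analyzed in the paper; it assumes a generic uniform quantizer (the names `quantize`, `n_bits`, `v_max` are placeholders), synthetic local quadratic objectives, and an untuned step size, and it shows a master averaging quantized worker gradients. With a fixed quantization range such a scheme only reaches a neighborhood of the optimum; preserving exact linear convergence requires adapting the quantization range as the iterates converge, which is the kind of design question the abstract refers to.

```python
# Illustrative sketch only: a generic b-bit uniform quantizer applied to
# worker gradients in a master-worker gradient descent loop. This is NOT
# the quantizer of the paper; quantize, n_bits, v_max, the step size, and
# the quadratic objectives are assumptions chosen for illustration.
import numpy as np

def quantize(v, n_bits=4, v_max=10.0):
    """Clip v to [-v_max, v_max] and snap it to a uniform grid with 2**n_bits levels."""
    levels = 2 ** n_bits - 1
    step = 2 * v_max / levels
    clipped = np.clip(v, -v_max, v_max)
    return np.round((clipped + v_max) / step) * step - v_max

rng = np.random.default_rng(0)
dim, n_workers = 5, 4
# Each worker holds a private quadratic f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = [rng.standard_normal((dim, dim)) + 2 * np.eye(dim) for _ in range(n_workers)]
b = [rng.standard_normal(dim) for _ in range(n_workers)]

x = np.zeros(dim)
alpha = 0.01  # step size (assumed, not tuned)
for _ in range(200):
    # Workers compute local gradients but communicate only quantized versions.
    grads = [quantize(A_i.T @ (A_i @ x - b_i)) for A_i, b_i in zip(A, b)]
    # The master averages the quantized gradients and updates the iterate.
    x -= alpha * np.mean(grads, axis=0)

true_grad = np.mean([A_i.T @ (A_i @ x - b_i) for A_i, b_i in zip(A, b)], axis=0)
print("final (unquantized) gradient norm:", np.linalg.norm(true_grad))
```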