000267529 001__ 267529
000267529 005__ 20190831134119.0
000267529 037__ $$aCONF
000267529 245__ $$aDecentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication
000267529 260__ $$c2019-06-09
000267529 269__ $$a2019-06-09
000267529 336__ $$aConference Papers
000267529 490__ $$aPMLR
000267529 520__ $$aWe consider decentralized stochastic optimization with the objective function (e.g. data samples for a machine learning task) being distributed over n machines that can only communicate with their neighbors on a fixed communication graph. To reduce the communication bottleneck, the nodes compress (e.g. quantize or sparsify) their model updates. We cover both unbiased and biased compression operators with quality denoted by \omega <= 1 (\omega=1 meaning no compression). We (i) propose a novel gossip-based stochastic gradient descent algorithm, CHOCO-SGD, that converges at rate O(1/(nT) + 1/(T \delta^2 \omega)^2) for strongly convex objectives, where T denotes the number of iterations and \delta the eigengap of the connectivity matrix. Although compression quality and network connectivity affect the higher-order terms, the first term in the rate, O(1/(nT)), is the same as for the centralized baseline with exact communication. We (ii) present a novel gossip algorithm, CHOCO-GOSSIP, for the average consensus problem that converges in time O(1/(\delta^2\omega) \log (1/\epsilon)) for accuracy \epsilon > 0. This is (to the best of our knowledge) the first gossip algorithm that supports arbitrary compressed messages for \omega > 0 and still exhibits linear convergence. We (iii) show in experiments that both of our algorithms outperform the respective state-of-the-art baselines and that CHOCO-SGD can reduce communication by at least two orders of magnitude.
000267529 542__ $$fCC BY
000267529 6531_ $$aml-ai
000267529 700__ $$0260674$$aKoloskova, Anastasiia$$g289041
000267529 700__ $$0250586$$aStich, Sebastian Urban$$g278401
000267529 700__ $$0250160$$aJaggi, Martin$$g276449
000267529 7112_ $$d9-15 June 2019$$cLong Beach, California, USA$$aICML 2019 - International Conference on Machine Learning
000267529 773__ $$j97$$tICML 2019 - International Conference on Machine Learning
000267529 8560_ $$fmartin.jaggi@epfl.ch
000267529 8564_ $$uhttps://infoscience.epfl.ch/record/267529/files/koloskova19a-supp.pdf$$zFinal$$s4981541
000267529 909C0 $$mmartin.jaggi@epfl.ch$$mjennifer.bachmann-ona@epfl.ch$$0252581$$zGrolimund, Raphael$$xU13319$$pMLO
000267529 909CO $$pconf$$pIC$$ooai:infoscience.epfl.ch:267529
000267529 960__ $$aanastasia.koloskova@epfl.ch
000267529 961__ $$apierre.devaud@epfl.ch
000267529 973__ $$aEPFL$$rREVIEWED
000267529 980__ $$aCONF
000267529 981__ $$aoverwrite