000234548 001__ 234548
000234548 005__ 20180317093738.0
000234548 037__ $$aSTUDENT
000234548 245__ $$aFully Quantized Distributed Gradient Descent
000234548 260__ $$c2017
000234548 269__ $$a2017
000234548 336__ $$aStudent Projects
000234548 520__ $$aIn major distributed optimization systems, the main bottleneck is often the communication between the different machines. To reduce the time spent on communication, heuristics that lower the precision of the messages sent have been developed and shown to produce good results in practice. [Alistarh et al., 2017] introduced the quantization framework to analyze theoretically the effects of lossy compression on the convergence rate of gradient descent algorithms. This work identifies an issue in one of the proofs in [Alistarh et al., 2017] and provides a new approach to reduce the error introduced by low-precision updates.
000234548 6531_ $$aConvex optimization
000234548 6531_ $$aQuantization
000234548 6531_ $$aDistributed optimization
000234548 700__ $$aKünstner, Frederik
000234548 720_2 $$0250160$$aJaggi, Martin$$edir.$$g276449
000234548 720_2 $$aStich, Sebastian Urban$$edir.
000234548 8564_ $$s642579$$uhttps://infoscience.epfl.ch/record/234548/files/final.pdf$$yPreprint$$zPreprint
000234548 909CO $$ooai:infoscience.tind.io:234548$$pIC
000234548 909C0 $$0252581$$pMLO$$xU13319
000234548 917Z8 $$x278401
000234548 937__ $$aEPFL-STUDENT-234548
000234548 973__ $$aEPFL
000234548 980__ $$aSTUDENT$$bSEMESTER