Infoscience
conference paper

PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization

Vogels, Thijs • Karimireddy, Sai Praneeth • Jaggi, Martin
January 1, 2019
Advances in Neural Information Processing Systems 32 (NeurIPS 2019)
33rd Conference on Neural Information Processing Systems (NeurIPS)

We study lossy gradient compression methods to alleviate the communication bottleneck in data-parallel distributed optimization. Despite the significant attention received, current compression schemes either do not scale well, or fail to achieve the target test accuracy. We propose a new low-rank gradient compressor based on power iteration that can i) compress gradients rapidly, ii) efficiently aggregate the compressed gradients using all-reduce, and iii) achieve test performance on par with SGD. The proposed algorithm is the only method evaluated that achieves consistent wall-clock speedups when benchmarked against regular SGD using highly optimized off-the-shelf tools for distributed communication. We demonstrate reduced training times for convolutional networks as well as LSTMs on common datasets. Our code is available at https://github.com/epfml/powersgd.
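As a rough illustration of the compressor the abstract describes: one power-iteration step produces two small factors P and Q whose product approximates the gradient matrix, and only those factors need to be aggregated across workers. The NumPy sketch below is illustrative only; the function names, the rank r = 4, and the matrix sizes are assumptions, it operates on a single matrix rather than a full model, and it omits the error-feedback and all-reduce machinery of the full method (see the linked repository for the authors' implementation).

# Illustrative sketch of one rank-r PowerSGD-style compression step,
# assuming a single n x m gradient matrix M and a warm-started Q.
# Not the authors' code; see https://github.com/epfml/powersgd.
import numpy as np

def compress(M, Q):
    # One power-iteration step: M is n x m, Q is m x r (reused across
    # steps as a warm start). Returns P (n x r) and Q (m x r) such
    # that P @ Q.T approximates M.
    P = M @ Q                # n x r
    P, _ = np.linalg.qr(P)   # orthonormalise the columns of P
    Q = M.T @ P              # m x r
    return P, Q              # workers would all-reduce P and Q, not M

def decompress(P, Q):
    return P @ Q.T           # rank-r approximation of M

rng = np.random.default_rng(0)
M = rng.standard_normal((256, 128))   # stand-in for a gradient matrix
Q = rng.standard_normal((128, 4))     # rank r = 4 (hypothetical choice)
P, Q = compress(M, Q)
M_hat = decompress(P, Q)
print(P.size + Q.size, "floats communicated instead of", M.size)

Because P = M @ Q is linear in M for a shared Q, summing the factors with all-reduce corresponds to compressing the averaged gradient, which is what allows the scheme to use all-reduce rather than a gather; the warm start of Q across steps is what makes a single power-iteration step sufficient in practice.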

Details
Type
conference paper
Web of Science ID
WOS:000535866905086
Author(s)
Vogels, Thijs  
Karimireddy, Sai Praneeth
Jaggi, Martin  
Date Issued
2019-01-01
Published in
Advances in Neural Information Processing Systems 32 (NeurIPS 2019)
Series title/Series vol.
Advances in Neural Information Processing Systems
Volume
32
Subjects
Computer Science, Artificial Intelligence • Computer Science
URL
fulltext: http://papers.nips.cc/paper/9571-powersgd-practical-low-rank-gradient-compression-for-distributed-optimization
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
MLO
Event name
33rd Conference on Neural Information Processing Systems (NeurIPS)
Event place
Vancouver, Canada
Event date
Dec 08-14, 2019
Available on Infoscience
July 10, 2020
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/169957