Infoscience

research article

The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Updates

Stich, Sebastian U. • Karimireddy, Sai Praneeth
January 1, 2020
Journal of Machine Learning Research

We analyze (stochastic) gradient descent (SGD) with delayed updates on smooth quasi-convex and non-convex functions and derive concise, non-asymptotic convergence rates. We show that the rate of convergence in all cases consists of two terms: (i) a stochastic term which is not affected by the delay, and (ii) a higher-order deterministic term which is only linearly slowed down by the delay. Thus, in the presence of noise, the effects of the delay become negligible after a few iterations and the algorithm converges at the same optimal rate as standard SGD. This result extends a line of research that showed similar results in the asymptotic regime or for strongly convex quadratic functions only.
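To make the two-term structure concrete, the following is a schematic reading of such a rate for smooth non-convex functions (illustrative only, not the paper's exact statement; constants and problem parameters are omitted), with T the number of iterations, tau the delay, and sigma^2 a bound on the gradient noise:

```latex
% Schematic two-term rate for delayed SGD on smooth non-convex functions.
% Illustrative form only; smoothness constants and initial suboptimality are omitted.
\[
  \frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\,\bigl\|\nabla f(x_t)\bigr\|^{2}
  \;\lesssim\;
  \underbrace{\frac{\sigma}{\sqrt{T}}}_{\text{stochastic term, unaffected by the delay}}
  \;+\;
  \underbrace{\frac{\tau}{T}}_{\text{higher-order term, slowed down linearly by }\tau}
\]
```

Once T exceeds roughly tau^2 / sigma^2 (up to constants), the first term dominates and the delay no longer affects the leading-order rate, which is the sense in which the delay becomes negligible in the presence of noise.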

We further show similar results for SGD with more intricate forms of delayed gradients: compressed gradients under error compensation, and local SGD, where multiple workers perform local steps before communicating with each other. In all of these settings, we improve upon the best known rates.
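As a concrete illustration of the error-compensation mechanism referred to above, the sketch below shows a generic error-feedback SGD loop with compressed updates. It is a minimal textbook-style sketch, assuming a top-k compressor and a user-supplied stochastic gradient oracle `grad_fn`; it is not necessarily the exact variant analyzed in the paper.

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v and zero out the rest (assumed compressor)."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def ef_sgd(grad_fn, x0, lr=0.1, k=10, steps=1000):
    """Error-feedback SGD sketch: compress (step + accumulated error), apply the
    compressed update, and carry the compression residual over to the next step."""
    x = x0.copy()
    e = np.zeros_like(x0)          # error accumulator (compression residual)
    for _ in range(steps):
        g = grad_fn(x)             # stochastic gradient at the current iterate
        p = lr * g + e             # add back the residual from previous steps
        delta = top_k(p, k)        # compressed update actually applied/transmitted
        e = p - delta              # store what the compressor dropped
        x = x - delta              # apply only the compressed part
    return x
```

The essential design choice is that the residual `e` left over by the compressor is fed back into the next update, so no gradient information is permanently discarded; this accumulated error is what the error-feedback framework tracks in the analysis.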

These results show that SGD is robust to compressed and/or delayed stochastic gradient updates. This is particularly important for distributed parallel implementations, where asynchronous and communication-efficient methods are the key to achieving linear speedups for optimization with multiple devices.

Type
research article
Web of Science ID
WOS:000608908400001
Author(s)
Stich, Sebastian U. • Karimireddy, Sai Praneeth
Date Issued
2020-01-01
Publisher
Microtome Publishing
Published in
Journal of Machine Learning Research
Volume
21
Start page
237
Subjects
Automation & Control Systems • Computer Science, Artificial Intelligence • Computer Science • delayed gradients • error-compensation • error-feedback • gradient compression • local sgd • machine learning • optimization • stochastic gradient descent
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
MLO
Available on Infoscience
March 26, 2021
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/176232