Infoscience
conference paper

LENA: Communication-Efficient Distributed Learning with Self-Triggered Gradient Uploads

Ghadikolaei, Hossein S.
•
Stich, Sebastian U.
•
Jaggi, Martin  
January 1, 2021
24th International Conference on Artificial Intelligence and Statistics (AISTATS)

In distributed optimization, parameter updates from the gradient-computing nodes must be aggregated at the orchestrating server in every iteration. When these updates are sent over an arbitrary commodity network, bandwidth and latency can be limiting factors. We propose a communication framework in which nodes may skip unnecessary uploads. Each node locally accumulates an error vector in memory and self-triggers the upload of the memory contents to the parameter server using a significance filter. The server then uses a history of the nodes' gradients to update the parameters. We characterize the convergence rate of our algorithm in smooth settings (strongly convex, convex, and nonconvex) and show that it enjoys the same convergence rate as sending gradients in every iteration, with substantially fewer uploads. Numerical experiments on real data indicate a significant reduction in network resource usage (total communicated bits and latency), especially in large networks, compared with state-of-the-art algorithms. Our results provide important practical insights for machine learning over resource-constrained networks, including Internet-of-Things deployments and geo-separated datasets across the globe.
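The per-node upload rule described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the class name, the norm-based threshold rule, and the reset-on-upload behavior are assumptions made for the sake of the example.

```python
import numpy as np

class SelfTriggeredNode:
    """Sketch of a worker node that accumulates gradients in local
    memory and uploads only when the accumulated update passes a
    significance filter (here assumed to be an l2-norm threshold)."""

    def __init__(self, dim, threshold):
        self.memory = np.zeros(dim)   # locally accumulated error vector
        self.threshold = threshold    # significance-filter level

    def step(self, gradient):
        """Accumulate the new gradient; return the memory contents for
        upload if the filter triggers, otherwise return None (skip)."""
        self.memory += gradient
        if np.linalg.norm(self.memory) >= self.threshold:
            update = self.memory.copy()
            self.memory[:] = 0.0      # memory is flushed on upload
            return update             # sent to the parameter server
        return None                   # upload skipped this iteration

# Small gradients are skipped until they accumulate past the threshold.
node = SelfTriggeredNode(dim=2, threshold=1.0)
print(node.step(np.array([0.3, 0.0])))  # None: below threshold
print(node.step(np.array([0.9, 0.0])))  # [1.2 0. ]: triggered upload
```

Skipped gradients are not discarded: they remain in the error memory, so their contribution is eventually uploaded, which is what lets the scheme retain the convergence rate of uncompressed gradient exchange.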

Type
conference paper
Web of Science ID

WOS:000659893804073

Author(s)
Ghadikolaei, Hossein S.
Stich, Sebastian U.
Jaggi, Martin  
Date Issued

2021-01-01

Publisher

MICROTOME PUBLISHING

Publisher place

Brookline

Published in
24th International Conference on Artificial Intelligence and Statistics (AISTATS)
Series title/Series vol.

Proceedings of Machine Learning Research

Volume

130

Subjects

Computer Science, Artificial Intelligence

•

Mathematics, Applied

•

Statistics & Probability

•

Computer Science

•

Mathematics

•

optimization

Editorial or Peer reviewed

REVIEWED

Written at

EPFL

EPFL units
MLO  
Event name
24th International Conference on Artificial Intelligence and Statistics (AISTATS)
Event place
ELECTR NETWORK
Event date
Apr 13-15, 2021

Available on Infoscience
August 28, 2021
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/180891
Contact: infoscience@epfl.ch


Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.