conference paper

Democratizing Machine Learning

Boubouh, Karim • Boussetta, Amine • Gupta, Nirupam • Maurer, Alexandre • Pinot, Rafael
January 1, 2022
2022 41st International Symposium on Reliable Distributed Systems (SRDS 2022)
41st International Symposium on Reliable Distributed Systems (SRDS)

The increasing prevalence of personal devices motivates the design of algorithms that can leverage their computing power, together with the data they generate, to build privacy-preserving and effective machine learning models. However, traditional distributed learning algorithms impose a uniform workload on all participating devices, most often discarding the weakest participants. This not only leads to suboptimal use of the available computational resources, but also significantly reduces the quality of the learning process, since the data held by the slowest devices is excluded from the procedure.

This paper proposes HGO, a distributed learning scheme with parameterizable iteration costs that can be adjusted to the computational capabilities of different devices. HGO encourages the participation of slower devices, thereby improving the accuracy of the model when the participants do not share the same dataset. When combined with a robust aggregation rule, HGO can tolerate some level of Byzantine behavior, depending on the hardware profile of the devices (we prove, for the first time, a tradeoff between Byzantine tolerance and hardware heterogeneity). We also demonstrate the convergence of HGO, theoretically and empirically, without assuming any specific partitioning of the data over the devices. We present an exhaustive set of experiments, evaluating the performance of HGO on several classification tasks and highlighting the importance of incorporating slow devices when learning in a Byzantine-prone environment with heterogeneous participants.
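The abstract only sketches HGO at this level of detail, but its two ingredients can be illustrated concretely. The Python snippet below is a minimal simulation under stated assumptions, not the paper's implementation: iteration cost is hypothetically scaled per device via mini-batch size (HGO itself builds on coordinate descent), and the robust aggregation rule shown is coordinate-wise median, a standard Byzantine-tolerant choice that is not necessarily the one used in the paper. All names, profiles, and parameters are illustrative.

```python
import numpy as np

# Minimal sketch (not the paper's HGO): devices with heterogeneous
# capacities compute gradient estimates whose cost matches their
# hardware profile, and the server aggregates them with a
# Byzantine-robust rule (coordinate-wise median).

rng = np.random.default_rng(0)
dim, n_samples, n_devices = 50, 200, 10

# Synthetic least-squares task: device i holds (A[i], b[i]).
x_true = rng.normal(size=dim)
A = [rng.normal(size=(n_samples, dim)) for _ in range(n_devices)]
b = [A[i] @ x_true + 0.1 * rng.normal(size=n_samples) for i in range(n_devices)]

# Hypothetical "hardware profiles": the fraction of its local data a
# device can process per round. Slow devices do cheaper iterations
# instead of being dropped, which is the behavior the abstract advocates.
capacity = rng.uniform(0.1, 1.0, size=n_devices)
byzantine = {0}  # device 0 simulates arbitrary (Byzantine) behavior

def local_gradient(i, x):
    if i in byzantine:
        return rng.normal(scale=100.0, size=dim)  # arbitrary garbage
    m = max(1, int(capacity[i] * n_samples))      # cost scaled to capacity
    rows = rng.choice(n_samples, size=m, replace=False)
    Ai, bi = A[i][rows], b[i][rows]
    return Ai.T @ (Ai @ x - bi) / m               # mini-batch gradient

x, lr = np.zeros(dim), 0.05
for _ in range(300):
    grads = np.stack([local_gradient(i, x) for i in range(n_devices)])
    # Coordinate-wise median tolerates a minority of Byzantine devices,
    # unlike plain averaging, which a single attacker can corrupt.
    x -= lr * np.median(grads, axis=0)

print("distance to optimum:", np.linalg.norm(x - x_true))
```

With one attacker among ten devices, the median-based update still converges, whereas a plain mean could be dragged arbitrarily far; the tradeoff proved in the paper quantifies how much of this tolerance survives as the hardware profiles become more heterogeneous.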

Type
conference paper
DOI
10.1109/SRDS55811.2022.00019
Web of Science ID
WOS:000920405900009
Author(s)
Boubouh, Karim
Boussetta, Amine
Gupta, Nirupam
Maurer, Alexandre
Pinot, Rafael
Date Issued
2022-01-01
Publisher
IEEE
Publisher place
New York
Published in
2022 41st International Symposium on Reliable Distributed Systems (SRDS 2022)
ISBN of the book
978-1-6654-9753-4
Series title/Series vol.
Symposium on Reliable Distributed Systems Proceedings
Start page
94
End page
120
Subjects
Computer Science, Hardware & Architecture • Computer Science, Theory & Methods • Engineering, Electrical & Electronic • Computer Science • Engineering • distributed computing • Byzantine failures • heterogeneity • machine learning • optimization • coordinate descent
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
DCL
Event name
41st International Symposium on Reliable Distributed Systems (SRDS)
Event place
Vienna, Austria
Event date
Sep 19-22, 2022
Available on Infoscience
March 13, 2023
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/195733