conference paper not in proceedings

Distributed Momentum for Byzantine-resilient Stochastic Gradient Descent

El Mhamdi, El Mahdi • Guerraoui, Rachid • Rouault, Sébastien Louis Alexandre
2021
9th International Conference on Learning Representations (ICLR)

Byzantine-resilient Stochastic Gradient Descent (SGD) aims at shielding model training from Byzantine faults, be they ill-labeled training datapoints, exploited software/hardware vulnerabilities, or malicious worker nodes in a distributed setting. Two recent attacks, however, have been challenging state-of-the-art defenses, often successfully preventing the model from even fitting the training set. The main identified weakness in current defenses is their requirement of a sufficiently low variance-norm ratio for the stochastic gradients. We propose a practical method which, despite increasing the variance, reduces the variance-norm ratio, mitigating the identified weakness. We assess the effectiveness of our method over 736 different training configurations, comprising the 2 state-of-the-art attacks and 6 defenses. For confidence and reproducibility purposes, each configuration is run 5 times with specified seeds (1 to 5), totalling 3680 runs. In our experiments, when the attack is effective enough to decrease the highest observed top-1 cross-accuracy by at least 20% compared to the unattacked run, our technique systematically raises that accuracy again, and recovers at least 20% in more than 60% of the cases.
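As the title and abstract suggest, the core idea is to let each worker apply classical momentum to its own stochastic gradients before they reach the Byzantine-resilient aggregation rule, lowering the variance-norm ratio of the submitted vectors. Below is a minimal, hypothetical sketch of that idea, not the authors' code: it uses NumPy, a quadratic toy objective, and coordinate-wise median as a stand-in for the robust aggregators evaluated in the paper; the momentum factor, learning rate, and worker counts are illustrative assumptions.

```python
import numpy as np

def worker_momentum(grad, buffer, beta=0.9):
    """Update one worker's momentum buffer in place and return it (m_t = beta * m_{t-1} + g_t)."""
    buffer *= beta
    buffer += grad
    return buffer

def robust_aggregate(vectors):
    """Example Byzantine-resilient aggregation rule: coordinate-wise median."""
    return np.median(np.stack(vectors), axis=0)

rng = np.random.default_rng(0)
dim, n_honest, beta, lr = 10, 5, 0.9, 0.1
theta = np.zeros(dim)                               # model parameters
buffers = [np.zeros(dim) for _ in range(n_honest)]  # one momentum buffer per honest worker

for step in range(200):
    true_grad = theta - 1.0                          # gradient of the toy loss 0.5 * ||theta - 1||^2
    submissions = []
    for buf in buffers:                              # honest workers: noisy gradient + local momentum
        g = true_grad + 0.5 * rng.standard_normal(dim)
        submissions.append(worker_momentum(g, buf, beta).copy())
    submissions.append(100.0 * rng.standard_normal(dim))  # one Byzantine worker sends an arbitrary vector
    # scale by (1 - beta) so the momentum step roughly matches a plain SGD step size
    theta -= lr * (1 - beta) * robust_aggregate(submissions)

print("distance to optimum:", np.linalg.norm(theta - 1.0))
```

In this toy setting the median discards the Byzantine vector because it lies far from the honest submissions, while the momentum buffers keep the honest submissions tightly clustered relative to their norm; that clustering is intended to illustrate the variance-norm effect the abstract refers to.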

Type
conference paper not in proceedings
Author(s)
El Mhamdi, El Mahdi
Guerraoui, Rachid
Rouault, Sébastien Louis Alexandre
Date Issued
2021
Total of pages
37
Subjects
Byzantine • Machine Learning • Stochastic Gradient Descent • Distributed Learning • Momentum
URL
https://iclr.cc/virtual/2021/papers.html?filter=titles
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
DCL
Event name
9th International Conference on Learning Representations (ICLR)
Event place
virtual conference
Event date
May 4-8, 2021
Available on Infoscience
July 16, 2021
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/179932