conference paper

Byzantine Fault-Tolerant Distributed Machine Learning with Norm-Based Comparative Gradient Elimination

Gupta, Nirupam • Liu, Shuo • Vaidya, Nitin
January 1, 2021
51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN-W 2021)
51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)

This paper considers the Byzantine fault-tolerance problem in the distributed stochastic gradient descent (D-SGD) method, a popular algorithm for distributed multi-agent machine learning. In this problem, each agent samples data points independently from a certain data-generating distribution. In the fault-free case, the D-SGD method allows all the agents to learn a mathematical model best fitting the data collectively sampled by all agents. We consider the case when a fraction of agents may be Byzantine faulty. Such faulty agents may not follow a prescribed algorithm correctly, and may render the traditional D-SGD method ineffective by sharing arbitrary incorrect stochastic gradients. We propose a norm-based gradient-filter, named comparative gradient elimination (CGE), that robustifies the D-SGD method against Byzantine agents. We show that the CGE gradient-filter guarantees fault-tolerance against a bounded fraction of Byzantine agents under standard stochastic assumptions, and is computationally simpler than many existing gradient-filters such as multi-KRUM, geometric median-of-means, and the spectral filters. We empirically show, by simulating distributed learning on neural networks, that the fault-tolerance of CGE is comparable to that of existing gradient-filters. We also empirically show that exponential averaging of stochastic gradients improves the fault-tolerance of a generic gradient-filter.
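
The abstract's description of CGE is concrete enough to sketch. Below is a minimal NumPy illustration, under the assumption that the filter works as the abstract states: the server sorts the n received gradient vectors by Euclidean norm and averages the n - f smallest-norm ones, eliminating the f largest-norm gradients as potentially Byzantine. The function names, the smoothing parameter beta in the exponential-averaging helper, and the toy data are illustrative assumptions, not taken from the paper.

    import numpy as np

    def cge_aggregate(gradients, f):
        # CGE gradient-filter sketch: sort the n received gradients by
        # Euclidean norm, keep the n - f smallest-norm ones, and average
        # them; the f largest-norm gradients are eliminated as suspect.
        n = len(gradients)
        order = np.argsort([np.linalg.norm(g) for g in gradients])
        kept = [gradients[i] for i in order[: n - f]]
        return np.mean(kept, axis=0)

    def exp_average(avg, grad, beta=0.9):
        # Exponential averaging of an agent's stochastic gradients, which
        # the abstract reports improves the fault-tolerance of a generic
        # gradient-filter; beta is a hypothetical smoothing parameter.
        return beta * avg + (1 - beta) * grad

    # Toy usage: 5 honest agents plus 2 Byzantine agents that send
    # large arbitrary gradients; CGE discards the 2 largest-norm vectors.
    rng = np.random.default_rng(0)
    honest = [rng.normal(0.0, 1.0, size=10) for _ in range(5)]
    faulty = [rng.normal(0.0, 100.0, size=10) for _ in range(2)]
    update = cge_aggregate(honest + faulty, f=2)

Because the filter only compares vector norms, its per-iteration cost is a sort over n scalars, which is consistent with the abstract's claim that CGE is computationally simpler than multi-KRUM or spectral filters.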

Type
conference paper
DOI
10.1109/DSN-W52860.2021.00037
Web of Science ID
WOS:000702266700026
Author(s)
Gupta, Nirupam  
Liu, Shuo
Vaidya, Nitin
Date Issued
2021-01-01
Publisher
IEEE Computer Society
Publisher place
Los Alamitos
Published in
51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN-W 2021)
ISBN of the book
978-1-6654-3950-3
Series title/Series vol.
International Conference on Dependable Systems and Networks Workshops
Start page
175
End page
181
Subjects
Computer Science, Hardware & Architecture • Computer Science, Information Systems • Computer Science, Theory & Methods • Computer Science
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
DCL  
Event name
51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)
Event place
Electronic network (online)
Event date
Jun 21-24, 2021
Available on Infoscience
October 23, 2021
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/182439