conference paper

Brief Announcement: A Case for Byzantine Machine Learning

Farhadkhani, Sadegh • Guerraoui, Rachid • Gupta, Nirupam • Pinot, Rafaël
June 17, 2024
PODC '24: Proceedings of the 43rd ACM Symposium on Principles of Distributed Computing
43rd ACM Symposium on Principles of Distributed Computing

The success of machine learning (ML) has been intimately linked with the availability of large amounts of data, typically collected from heterogeneous sources and processed on vast networks of computing devices (also called workers). Beyond accuracy, the use of ML in critical domains such as healthcare and autonomous driving calls for robustness against data poisoning and faulty workers. The problem of Byzantine ML formalizes these robustness issues by considering a distributed ML environment in which workers (storing a portion of the global dataset) can deviate arbitrarily from the prescribed algorithm. Although the problem has attracted a lot of attention from a theoretical point of view, its practical importance for addressing realistic faults (where the behavior of any worker is locally constrained) remains unclear. It has been argued that the seemingly weaker threat model where only workers' local datasets get poisoned is more reasonable. We highlight here some important results on the efficacy of Byzantine robustness for tackling data poisoning. In particular, we discuss cases where, while tolerating a wider range of faulty behaviors, Byzantine ML yields solutions that are optimal even under the weaker threat model of data poisoning.
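
As background for this record, the sketch below illustrates one standard building block of Byzantine ML referred to in the abstract: the server replaces the plain average of worker gradients with a robust aggregation rule, here a coordinate-wise trimmed mean. This is a generic illustration in Python, not the algorithm analyzed in the paper; the function name trimmed_mean and the toy numbers are assumptions made only for this example.

import numpy as np

def trimmed_mean(gradients, f):
    """Coordinate-wise trimmed mean of worker gradient vectors.

    gradients: array of shape (n_workers, dim), one gradient per worker.
    f: number of Byzantine workers to tolerate (requires n_workers > 2*f).
    """
    grads = np.asarray(gradients, dtype=float)
    n = grads.shape[0]
    if n <= 2 * f:
        raise ValueError("trimmed mean needs n_workers > 2*f")
    # Sort every coordinate independently, drop the f smallest and the
    # f largest entries, and average what remains.
    sorted_grads = np.sort(grads, axis=0)
    return sorted_grads[f:n - f].mean(axis=0)

# Toy server step: four honest workers send similar gradients, one
# Byzantine worker sends an arbitrary vector. The robust aggregate stays
# close to the honest mean, while the plain average is pulled far away.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(4, 3))
byzantine = np.full((1, 3), 1e6)
all_grads = np.vstack([honest, byzantine])
print("plain mean   :", all_grads.mean(axis=0))
print("trimmed mean :", trimmed_mean(all_grads, f=1))

The trimmed mean is only one of several robust aggregation rules in the Byzantine ML literature; the paper itself should be consulted for the specific rules and guarantees it studies.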

Details
Type
conference paper
DOI
10.1145/3662158.3662802
Author(s)
Farhadkhani, Sadegh • Guerraoui, Rachid • Gupta, Nirupam • Pinot, Rafaël
Date Issued

2024-06-17

Publisher

ACM

Publisher place

New York, USA

Published in
PODC '24: Proceedings of the 43rd ACM Symposium on Principles of Distributed Computing
ISBN of the book

979-8-4007-0668-4

Editorial or Peer reviewed

REVIEWED

Written at

EPFL

EPFL units
DCL  
Event name
43rd ACM Symposium on Principles of Distributed Computing
Event acronym
PODC
Event place
Nantes, France
Event date
2024-06-17 - 2024-06-21

Funder
Swiss National Science Foundation
Grant Number
200021-200477

Available on Infoscience
August 26, 2024
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/240859