Brief Announcement: Byzantine-Tolerant Machine Learning

We report on \emph{Krum}, the first \emph{provably} Byzantine-tolerant aggregation rule for distributed Stochastic Gradient Descent (SGD). Krum guarantees the convergence of SGD even in a distributed setting where (asymptotically) up to half of the workers are malicious adversaries attacking the learning system.
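The announcement does not spell the rule out, so the following is a minimal Python sketch of the standard Krum selection criterion, assuming n workers of which at most f are Byzantine: each proposed gradient is scored by the sum of squared distances to its n - f - 2 closest peers, and the lowest-scoring gradient is selected. The function name krum and the numpy-based layout are illustrative, not the authors' implementation.

    import numpy as np

    def krum(gradients, f):
        # gradients: list of n 1-D numpy arrays, one proposal per worker.
        # f: assumed upper bound on the number of Byzantine workers.
        n = len(gradients)
        assert n >= 2 * f + 3, "Krum requires n >= 2f + 3 workers"
        # Pairwise squared Euclidean distances between all proposals.
        dists = np.array([[np.sum((gi - gj) ** 2) for gj in gradients]
                          for gi in gradients])
        scores = []
        for i in range(n):
            # Sorted distances from worker i to the other n - 1 workers
            # (the zero self-distance is removed).
            others = np.sort(np.delete(dists[i], i))
            # Score: sum over the n - f - 2 closest neighbours.
            scores.append(np.sum(others[: n - f - 2]))
        # Return the proposal whose neighbourhood is tightest.
        return gradients[int(np.argmin(scores))]

For example, with n = 7 and f = 2, each score sums the 3 smallest of a worker's 6 pairwise distances, so a gradient pushed far from the honest cluster by an attacker receives a high score and is never selected.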


Presented at:
Principles of Distributed Computing (PODC), Washington, D.C., USA, July 2017
Year:
2017
Publisher:
ACM
Note:
This work has been supported in part by the European ERC Grant 339539 - AOC and by the Swiss National Science Foundation under grant 200021_169588 TARBDA (a Theoretical Approach to Robustness in Biological Distributed Algorithms).

