Abstract
We report on Krum, the first provably Byzantine-tolerant aggregation rule for distributed Stochastic Gradient Descent (SGD). Krum guarantees the convergence of SGD even in a distributed setting where (asymptotically) up to half of the workers can be malicious adversaries trying to attack the learning system.
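As a minimal sketch of the selection rule the abstract refers to (the scoring follows the companion full paper; the function name, NumPy usage, and array shapes below are illustrative assumptions, not the authors' reference implementation): each worker proposes a gradient, each proposal is scored by the sum of squared distances to its n - f - 2 closest peers, and the proposal with the lowest score is chosen.

import numpy as np

def krum(gradients, f):
    # gradients: assumed array of shape (n, d), one proposed gradient per worker.
    # f: assumed bound on the number of Byzantine workers; Krum needs n >= 2f + 3.
    n = gradients.shape[0]
    if n < 2 * f + 3:
        raise ValueError("Krum requires n >= 2f + 3 workers")
    # Pairwise squared Euclidean distances between all proposed gradients.
    diffs = gradients[:, None, :] - gradients[None, :, :]
    dists = (diffs ** 2).sum(axis=-1)
    np.fill_diagonal(dists, np.inf)  # a proposal is not its own peer
    # Score each proposal by the sum of distances to its n - f - 2 closest peers.
    closest = np.sort(dists, axis=1)[:, : n - f - 2]
    scores = closest.sum(axis=1)
    # Output the proposal with minimal score; the server applies it as the SGD step.
    return gradients[np.argmin(scores)]

With n = 2f + 3 this tolerates f Byzantine workers, which approaches half of the workers as n grows, matching the asymptotic bound stated in the abstract.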
Title
Brief Announcement: Byzantine-Tolerant Machine Learning
Conference
ACM Symposium on Principles of Distributed Computing (PODC), Washington, D.C., USA, July 2017
Date
2017
Publisher
ACM
Note
This work has been supported in part by the European Research Council (ERC) Grant 339539 - AOC and by the Swiss National Science Foundation under grant 200021_169588 TARBDA (a Theoretical Approach to Robustness in Biological Distributed Algorithms).
Record creation date
2017-06-28