Conference paper (not in proceedings)
Brief Announcement: Byzantine-Tolerant Machine Learning
2017
We report on \emph{Krum}, the first \emph{provably} Byzantine-tolerant aggregation rule for distributed Stochastic Gradient Descent (SGD). Krum guarantees the convergence of SGD even in a distributed setting where (asymptotically) up to half of the workers can be malicious adversaries trying to attack the learning system.
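The abstract summarizes Krum's guarantee without stating the rule itself. As context, Krum scores each worker's gradient by the summed squared distance to its closest peers and selects the lowest-scoring one. The sketch below is an illustrative reimplementation under that description, not the authors' code; the function name `krum` and all variable names are my own.

```python
import numpy as np

def krum(gradients, f):
    """Illustrative Krum sketch (assumed from the paper's description):
    score each gradient by the summed squared distance to its n - f - 2
    closest peers, then return the gradient with the lowest score.

    gradients: list of 1-D numpy arrays, one per worker
    f: assumed number of Byzantine workers
    """
    n = len(gradients)
    # Krum's guarantee requires a majority of honest workers: n > 2f + 2.
    assert n >= 2 * f + 3, "Krum requires n > 2f + 2"
    scores = []
    for i, g_i in enumerate(gradients):
        # Squared Euclidean distances from g_i to every other gradient.
        dists = sorted(
            float(np.sum((g_i - g_j) ** 2))
            for j, g_j in enumerate(gradients)
            if j != i
        )
        # Sum over only the n - f - 2 closest peers, so distant
        # (potentially Byzantine) gradients do not inflate the score.
        scores.append(sum(dists[: n - f - 2]))
    return gradients[int(np.argmin(scores))]
```

With six honest gradients clustered together and one adversarial outlier, the outlier's score is dominated by its large distances, so Krum returns one of the honest gradients.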
Name: Brief_Announcement__Byzantine_Tolerant_Machine_Learning(1).pdf
Type: Publisher's version
Access type: openaccess
Size: 532.89 KB
Format: Adobe PDF
Checksum (MD5): f7d81275599da7eab64c8d2ab623ab24