2017
Brief Announcement: Byzantine-Tolerant Machine Learning
Conference paper not in proceedings
We report on \emph{Krum}, the first \emph{provably} Byzantine-tolerant aggregation rule for distributed Stochastic Gradient Descent (SGD). Krum guarantees the convergence of SGD even in a distributed setting where (asymptotically) up to half of the workers can be malicious adversaries trying to attack the learning system.
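The Krum rule described above selects, among the workers' proposed gradient vectors, the one whose summed squared distance to its closest neighbors is smallest, so that an outlier submitted by a Byzantine worker is unlikely to be chosen. A minimal sketch of that selection rule, assuming `n` workers, an upper bound `f` on the number of Byzantine workers, and the condition `n > 2f + 2` from the paper (function name and interface are illustrative, not the authors' implementation):

```python
import numpy as np

def krum(gradients, f):
    """Sketch of the Krum selection rule.

    gradients: list of n 1-D numpy arrays, one proposed gradient per worker.
    f: assumed upper bound on the number of Byzantine workers.
    Returns the single gradient with the smallest Krum score.
    """
    n = len(gradients)
    assert n > 2 * f + 2, "Krum requires n > 2f + 2 workers"
    # Pairwise squared Euclidean distances between all proposals.
    dists = np.array([[np.sum((gi - gj) ** 2) for gj in gradients]
                      for gi in gradients])
    scores = []
    for i in range(n):
        # Distances from worker i to every other worker, ascending.
        others = np.sort(np.delete(dists[i], i))
        # Score = sum over the n - f - 2 closest neighbours.
        scores.append(others[: n - f - 2].sum())
    # Select the proposal with the minimal score.
    return gradients[int(np.argmin(scores))]
```

With five honest workers proposing gradients near `(1, 1)` and one Byzantine worker proposing `(100, 100)`, the outlier's score is dominated by its huge distances to the honest majority, so an honest gradient is selected.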
Name
Brief_Announcement__Byzantine_Tolerant_Machine_Learning(1).pdf
Type
Publisher's Version
Access type
openaccess
Size
532.89 KB
Format
Adobe PDF
Checksum (MD5)
f7d81275599da7eab64c8d2ab623ab24