000256124 001__ 256124
000256124 005__ 20190619220038.0
000256124 037__ $$aCONF
000256124 245__ $$aThe Hidden Vulnerability of Distributed Learning in Byzantium
000256124 260__ $$c2018
000256124 269__ $$a2018
000256124 300__ $$a13
000256124 336__ $$aConference Papers
000256124 500__ $$aCamera-ready version also available in the ICML proceedings (open access)
000256124 520__ $$aWhile machine learning is going through an era of celebrated success, concerns have been raised about the vulnerability of its backbone: stochastic gradient descent (SGD). Recent approaches have been proposed to ensure the robustness of distributed SGD against adversarial (Byzantine) workers sending \emph{poisoned} gradients during the training phase. Some of these approaches have been proven \emph{Byzantine-resilient}: they ensure the \emph{convergence} of SGD despite the presence of a minority of adversarial workers. We show in this paper that \emph{convergence is not enough}. In high dimension $d \gg 1$, an adversary can build on the loss function's non-convexity to make SGD converge to \emph{ineffective} models. More precisely, we bring to light that existing Byzantine-resilient schemes leave a \emph{margin of poisoning} of $\Omega\left(f(d)\right)$, where $f(d)$ increases at least like $\sqrt{d}$. Based on this \emph{leeway}, we build a simple attack and experimentally show its effectiveness, ranging from strong to utmost, on CIFAR-10 and MNIST. We introduce \emph{Bulyan} and prove that it significantly reduces the attacker's leeway to a narrow $O\left(1/\sqrt{d}\right)$ bound. We empirically show that Bulyan does not suffer the fragility of existing aggregation rules and, at a reasonable cost in terms of required batch size, achieves convergence \emph{as if} only non-Byzantine gradients had been used to update the model.
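The abstract names Bulyan and the $\Omega(\sqrt{d})$ versus $O(1/\sqrt{d})$ leeway bounds but does not spell out the aggregation rule itself. As a rough illustration only, the sketch below assumes a Bulyan-style aggregation that first selects n - 2f gradients with a Krum-like distance score and then applies a coordinate-wise trimmed mean around the median; function names, parameters, and the toy usage at the end are illustrative assumptions, not the authors' code.

import numpy as np

def krum_select(grads, f):
    """Index of the gradient with the smallest Krum-style score, i.e. the
    smallest sum of squared distances to its n - f - 2 closest peers."""
    n = len(grads)
    dists = np.array([[float(np.sum((g - h) ** 2)) for h in grads] for g in grads])
    # Sorted row starts with the self-distance 0; skip it and keep the next n - f - 2 values.
    scores = [np.sort(row)[1:n - f - 1].sum() for row in dists]
    return int(np.argmin(scores))

def bulyan_like_aggregate(grads, f):
    """Illustrative Bulyan-style rule (assumes n >= 4f + 3 workers):
    select n - 2f gradients by repeated Krum, then average, per coordinate,
    the beta = (n - 2f) - 2f values closest to the coordinate-wise median."""
    grads = [np.asarray(g, dtype=float) for g in grads]
    n = len(grads)
    selected, pool = [], list(grads)
    # Selection phase: pick n - 2f gradients, one at a time, removing each winner from the pool.
    while len(selected) < n - 2 * f:
        i = krum_select(pool, f)
        selected.append(pool.pop(i))
    sel = np.stack(selected)                       # shape (n - 2f, d)
    beta = sel.shape[0] - 2 * f                    # values kept per coordinate
    med = np.median(sel, axis=0)
    # Aggregation phase: per coordinate, keep the beta values closest to the median and average them.
    order = np.argsort(np.abs(sel - med), axis=0)
    closest = np.take_along_axis(sel, order[:beta], axis=0)
    return closest.mean(axis=0)

# Toy usage: 9 honest gradients around 0, 2 Byzantine gradients pushed far away (f = 2, n = 11).
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 1.0, size=10) for _ in range(9)]
byzantine = [np.full(10, 100.0) for _ in range(2)]
print(bulyan_like_aggregate(honest + byzantine, f=2))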
000256124 542__ $$fCC BY-NC-SA
000256124 6531_ $$aMachine Learning
000256124 6531_ $$aDistributed Algorithms
000256124 6531_ $$aByzantine fault tolerance
000256124 6531_ $$aRobustness
000256124 6531_ $$astochastic gradient descent
000256124 6531_ $$aSGD
000256124 6531_ $$aPoisoning attack
000256124 6531_ $$aadversarial machine learning
000256124 700__ $$g200613$$aEl Mhamdi, El Mahdi$$0246705
000256124 700__ $$g105326$$aGuerraoui, Rachid$$0240335
000256124 700__ $$0251535$$aRouault, Sébastien Louis Alexandre$$g260806
000256124 7112_ $$dJuly 10-15, 2018$$cStockholm, Sweden$$aInternational Conference on Machine Learning
000256124 8560_ $$felmahdi.elmhamdi@epfl.ch
000256124 8564_ $$uhttps://infoscience.epfl.ch/record/256124/files/bulyan.pdf$$s741229
000256124 909C0 $$xU10407$$pDCL$$mrachid.guerraoui@epfl.ch$$0252114
000256124 909CO $$qGLOBAL_SET$$pconf$$pIC$$ooai:infoscience.epfl.ch:256124
000256124 960__ $$aelmahdi.elmhamdi@epfl.ch
000256124 961__ $$afantin.reichler@epfl.ch
000256124 973__ $$aEPFL$$rREVIEWED
000256124 980__ $$aCONF
000256124 981__ $$aoverwrite