Byzantine tolerant gradient descent for distributed machine learning with adversaries

The present application concerns a computer-implemented method for training a machine learning model in a distributed fashion using Stochastic Gradient Descent (SGD). The method is performed by a first computer in a distributed computing environment and comprises performing a learning round, which comprises: broadcasting a parameter vector to a plurality of worker computers in the distributed computing environment; receiving an estimate vector (a gradient estimate) from all or a subset of the worker computers, wherein each received estimate vector is either an estimate of a gradient of a cost function or an erroneous vector; and determining an updated parameter vector for use in the next learning round based only on a subset of the received estimate vectors. The method aggregates the gradients while guaranteeing resilience even when up to half of the workers are compromised (malfunctioning, erroneous, or modified by attackers).
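The abstract does not spell out the exact aggregation rule being claimed. Purely as an illustration, the following is a minimal Python sketch of one learning round that uses a well-known Krum-style selection criterion (score each received gradient by the sum of squared distances to its n - f - 2 closest peers, keep the lowest-scoring one) as a stand-in robust aggregator. The function names, the worker objects with a compute_gradient method, and the parameters lr and f are hypothetical and are not taken from the application.

    import numpy as np

    def krum_style_select(gradients, f):
        """Pick one received gradient, Krum-style: for each gradient, sum the
        squared distances to its n - f - 2 closest peers and return the
        gradient with the smallest score. This is a sketch of one well-known
        Byzantine-resilient rule, not necessarily the rule claimed here."""
        n = len(gradients)
        assert n - f - 2 > 0, "need enough honest workers for the scores to be meaningful"
        scores = []
        for i, g in enumerate(gradients):
            dists = sorted(
                np.sum((g - h) ** 2) for j, h in enumerate(gradients) if j != i
            )
            scores.append(sum(dists[: n - f - 2]))
        return gradients[int(np.argmin(scores))]

    def learning_round(params, workers, lr=0.01, f=1):
        """One round as described in the abstract (hypothetical worker API):
        broadcast the parameter vector, collect estimate vectors, aggregate
        only a subset of them, and apply an SGD step."""
        estimates = [w.compute_gradient(params) for w in workers]  # broadcast + receive
        update = krum_style_select(estimates, f)                   # robust aggregation
        return params - lr * update                                # SGD update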


Year: 2019
Alternative title (fr): Descente de gradient tolérant les byzantines pour apprentissage machine distribué avec des adversaires
Other identifiers: EPO Family ID: 60484385




Record created 2019-12-05, last modified 2020-10-25

