Abstract

Decentralized machine learning over peer-to-peer networks is appealing because it enables peers to learn personalized models without sharing user data or relying on any central server. Each peer can improve on its locally trained model by collaborating, across a network graph, with other peers that have similar objectives. While peer-to-peer networks offer an inherently scalable and cost-efficient learning scheme, they are also fragile: they can easily be disrupted by unfairness, free-riding, and adversarial behavior. In this paper, we present CDPL (Contribution Driven P2P Learning), a novel Byzantine-resilient distributed algorithm for training personalized models across similar peers. We demonstrate, both theoretically and empirically, the effectiveness of CDPL in terms of convergence speed and robustness to Byzantine behavior.
