Abstract

We propose an algorithm for learning from distributed data on a network of arbitrarily connected machines without exchanging data points. Parts of the dataset are processed locally at each machine, and a consensus communication algorithm is then employed to consolidate the results. This iterative two-stage process converges as if the entire dataset had been available on a single machine. The principal contribution of this paper is a proof of convergence of the distributed learning process in the general case where the learning algorithm is a contraction. Moreover, we derive the distributed update equation of a feed-forward neural network trained with back-propagation in order to verify the theoretical results. We evaluate the approach on a toy classification example and a real-world binary classification dataset.
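
To make the two-stage process concrete, the following is a minimal sketch of one possible instantiation, not the paper's exact method: each machine performs a local learning update on its own data partition (here a gradient step on an assumed least-squares loss, standing in for the contraction), then a consensus step averages parameters with neighbours according to a doubly stochastic weight matrix. The ring topology, loss function, and all variable names are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    n_machines, dim, n_local = 4, 3, 50
    true_w = rng.normal(size=dim)

    # Split a synthetic regression dataset across machines;
    # data points themselves are never exchanged.
    X = [rng.normal(size=(n_local, dim)) for _ in range(n_machines)]
    y = [Xi @ true_w + 0.01 * rng.normal(size=n_local) for Xi in X]

    # Doubly stochastic consensus weights for an assumed ring topology:
    # each machine averages with its two neighbours.
    W = np.zeros((n_machines, n_machines))
    for i in range(n_machines):
        W[i, i] = 0.5
        W[i, (i - 1) % n_machines] = 0.25
        W[i, (i + 1) % n_machines] = 0.25

    w = np.zeros((n_machines, dim))  # one parameter vector per machine
    lr = 0.01

    for it in range(500):
        # Stage 1: local learning step on local data only
        # (a gradient step here, standing in for the contraction).
        for i in range(n_machines):
            grad = X[i].T @ (X[i] @ w[i] - y[i]) / n_local
            w[i] -= lr * grad
        # Stage 2: consensus step -- machines exchange parameters,
        # not data points, and average with their neighbours.
        w = W @ w

    print("per-machine error:", np.linalg.norm(w - true_w, axis=1))

Under these assumptions, the per-machine parameter vectors agree with one another and approach the solution that would have been obtained had all the data resided on a single machine, which is the behaviour the convergence proof addresses.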
