Training Distributed Neural Networks by Consensus

We address the problem of learning a classifier from data distributed over a number of arbitrarily connected machines, without exchanging the data points. Our goal is to train a neural network at each machine as if the entire dataset were locally available. We accomplish this by building on the so-called consensus algorithm for scalar values distributed over a network. We describe an abstract framework for consensus learning and derive a distributed version of the multilayer feed-forward neural network with back-propagation and early stopping. Experiments show that, with a careful selection of parameters, our method performs like its non-distributed counterpart; the trade-off is a larger total computational effort across all machines.
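The paper's code is not reproduced here, but as a rough sketch of the consensus primitive the abstract refers to, the Python snippet below runs synchronous averaging rounds over an arbitrarily connected network: each machine repeatedly mixes its local parameter vector with its neighbours' values, so all machines converge to the network-wide average without exchanging data points. The function name, the Metropolis weighting scheme, and the ring topology are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def consensus_step(params, adjacency):
    """One synchronous consensus round (assumed Metropolis weights):
    each machine i nudges its vector toward each neighbour j by
    w_ij * (x_j - x_i); repeated rounds converge to the global mean
    on any connected graph."""
    n = len(params)
    degrees = adjacency.sum(axis=1)
    new_params = []
    for i in range(n):
        p = params[i].copy()
        for j in range(n):
            if adjacency[i, j]:
                # Metropolis weight keeps the mixing matrix doubly stochastic.
                w = 1.0 / (1.0 + max(degrees[i], degrees[j]))
                p += w * (params[j] - params[i])
        new_params.append(p)
    return new_params

# Example: 4 machines on a ring, each holding one scalar parameter;
# repeated consensus steps drive every value toward the mean, 2.5.
adjacency = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]])
values = [np.array([v]) for v in (1.0, 2.0, 3.0, 4.0)]
for _ in range(50):
    values = consensus_step(values, adjacency)
print([v.item() for v in values])  # all close to 2.5
```

In a distributed training loop of the kind the abstract describes, such a step would be interleaved with local back-propagation updates, applied to the network weights (or gradients) rather than scalars.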


Presented at:
Distributed machine learning and sparse representation with massive data sets, Sydney, Australia, 18-20, 2011
Year:
2011
