Conference paper

Distributed Learning via Diffusion Adaptation with Application to Ensemble Learning

We examine the problem of learning a set of parameters from data that are distributed across the agents of an ad-hoc network, where sharing the raw data is prohibited by privacy or communication constraints. We propose a distributed algorithm for online learning, prove that it guarantees a bounded excess risk, and show that the bound can be made arbitrarily small for sufficiently small step-sizes. We apply our framework to the expert-advice problem, in which the nodes learn the combination weights for a set of trained experts in a distributed fashion.
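For readers unfamiliar with diffusion adaptation, the strategy the abstract refers to can be sketched as an adapt-then-combine (ATC) scheme: each agent takes a stochastic-gradient step on its own data, then averages iterates with its neighbours, so only parameter estimates (never raw data) cross the network. The sketch below is illustrative only; the ring topology, step-size, least-mean-squares cost, and all variable names are assumptions for the example, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N agents jointly estimate a common parameter vector
# w_true from local streaming data, exchanging only iterates with neighbours.
N, d, mu, steps = 5, 3, 0.05, 2000
w_true = rng.standard_normal(d)

# Ring network: each agent averages with itself and its two neighbours,
# using doubly stochastic combination weights.
A = np.zeros((N, N))
for k in range(N):
    A[k, [k - 1, k, (k + 1) % N]] = 1.0 / 3.0

w = np.zeros((N, d))  # row k holds agent k's current estimate
for _ in range(steps):
    # Adapt: each agent takes a local LMS (stochastic-gradient) step.
    psi = np.empty_like(w)
    for k in range(N):
        x = rng.standard_normal(d)                    # local regressor
        y = x @ w_true + 0.1 * rng.standard_normal()  # noisy local observation
        psi[k] = w[k] + mu * (y - x @ w[k]) * x
    # Combine: average the intermediate iterates of the neighbourhood.
    w = A @ psi

# All agents should end up close to w_true despite never sharing raw data.
err = np.linalg.norm(w - w_true, axis=1).max()
```

The combine step is what couples the agents: without it each node would run an independent LMS filter, while with it the network behaves like a single adaptive learner whose steady-state error shrinks with the step-size, mirroring the bounded-excess-risk guarantee stated above.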


    • EPFL-CONF-233414

    Record created on 2017-12-19, modified on 2018-01-17

