Dictionary learning over large distributed models via dual-ADMM strategies

We consider the problem of dictionary learning over large-scale models, where the model parameters are distributed over a multi-agent network. We demonstrate that the dual optimization problem for inference is better conditioned than the primal problem and that the dual cost function is an aggregate of individual costs associated with different network agents. We also establish that the dual cost function is smooth, strongly convex, and possesses Lipschitz continuous gradients. These properties allow us to formulate efficient distributed ADMM algorithms for the dual inference problem. In particular, we show that the proximal operators utilized in the ADMM algorithm can be characterized in closed form with linear complexity for certain useful dictionary learning scenarios.
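As a rough illustration of the kind of mechanism the abstract refers to, the sketch below shows a generic (centralized, single-agent) ADMM iteration for a sparse-coding inference problem in which the proximal step has a closed form with linear complexity, namely elementwise soft-thresholding. This is not the paper's dual distributed algorithm; the function names (admm_sparse_coding, soft_threshold) and parameters (lam, rho, n_iter) are illustrative assumptions, not notation from the paper.

    # Minimal sketch (assumed setup, not the authors' method): scaled-form ADMM for
    #   min_y 0.5*||x - W y||_2^2 + lam*||y||_1
    # illustrating how a closed-form proximal operator drives each iteration.
    import numpy as np

    def soft_threshold(v, tau):
        """Closed-form proximal operator of tau*||.||_1 (elementwise, linear cost)."""
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def admm_sparse_coding(W, x, lam=0.1, rho=1.0, n_iter=200):
        """Solve the sparse-coding problem above with scaled-form ADMM."""
        m = W.shape[1]
        y = np.zeros(m)   # primal variable
        z = np.zeros(m)   # splitting variable
        u = np.zeros(m)   # scaled dual variable
        Q = W.T @ W + rho * np.eye(m)   # factorable once, reused every iteration
        Wx = W.T @ x
        for _ in range(n_iter):
            y = np.linalg.solve(Q, Wx + rho * (z - u))   # quadratic y-update
            z = soft_threshold(y + u, lam / rho)         # closed-form prox step
            u = u + y - z                                # dual variable update
        return z

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        W = rng.standard_normal((50, 100))
        y_true = np.zeros(100)
        y_true[:5] = rng.standard_normal(5)
        x = W @ y_true + 0.01 * rng.standard_normal(50)
        y_hat = admm_sparse_coding(W, x, lam=0.1)
        print("nonzeros recovered:", np.count_nonzero(np.abs(y_hat) > 1e-3))

In the paper's setting the analogous updates are formulated on the dual problem and split across network agents; the point of the sketch is only the role of a cheap, closed-form proximal operator inside each ADMM iteration.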


Published in:
Proceedings of the International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1-6
Presented at:
24th International Workshop on Machine Learning for Signal Processing (MLSP), Reims, France, September 21-24, 2014
Year:
2014
Publisher:
IEEE
 Record created 2017-12-19, last modified 2018-09-13

