Greedy dictionary selection for sparse representation

We develop an efficient learning framework to construct signal dictionaries for sparse representation by selecting the dictionary columns from multiple candidate bases. By sparse, we mean that only a few dictionary elements, compared to the ambient signal dimension, can exactly represent or well-approximate the signals of interest. We formulate both the selection of the dictionary columns and the sparse representation of signals as a joint combinatorial optimization problem. The proposed combinatorial objective maximizes variance reduction over the set of training signals by constraining the size of the dictionary as well as the number of dictionary columns that can be used to represent each signal. We show that if the available dictionary column vectors are incoherent, our objective function satisfies approximate submodularity. We exploit this property to develop SDSOMP and SDSMA, two greedy algorithms with approximation guarantees. We also describe how our learning framework enables dictionary selection for structured sparse representations, e.g., where the sparse coefficients occur in restricted patterns. We evaluate our approach on synthetic signals and natural images for representation and inpainting problems.
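The greedy selection idea can be illustrated with a minimal sketch: at each step, add the candidate column whose inclusion most reduces the total residual energy (i.e., most increases variance reduction) over the training signals, with each signal approximated by at most k of the selected atoms. This is only an illustrative simplification, not the authors' exact SDSOMP or SDSMA algorithms; the function names, the use of plain orthogonal matching pursuit for the inner sparse approximation, and all parameters are assumptions for the sketch.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: approximate y with at most k columns of D.

    Returns the chosen support and the final residual.
    """
    residual = y.astype(float).copy()
    support = []
    for _ in range(min(k, D.shape[1])):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j in support:  # residual already orthogonal to all atoms
            break
        support.append(j)
        # Least-squares refit on the current support, then update the residual.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, residual

def total_residual_energy(candidates, cols, signals, k):
    """Sum of squared OMP residuals over all training signals (columns of `signals`)."""
    if not cols:
        return float(np.sum(signals ** 2))
    D = candidates[:, cols]
    total = 0.0
    for i in range(signals.shape[1]):
        _, r = omp(D, signals[:, i], k)
        total += float(np.sum(r ** 2))
    return total

def greedy_dictionary_selection(candidates, signals, n_atoms, k):
    """Greedily select n_atoms columns of `candidates` that maximize
    variance reduction over `signals`, each represented with at most
    k selected atoms. A sketch of the greedy-selection principle only.
    """
    chosen, remaining = [], list(range(candidates.shape[1]))
    for _ in range(n_atoms):
        base = total_residual_energy(candidates, chosen, signals, k)
        gains = [base - total_residual_energy(candidates, chosen + [j], signals, k)
                 for j in remaining]
        best = remaining[int(np.argmax(gains))]
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

For example, if the candidates are the columns of a 4x4 identity matrix and the training signals are the first and third standard basis vectors, the greedy procedure selects exactly those two columns.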

Published in:
IEEE Journal of Selected Topics in Signal Processing, 5, 979-988
Presented at:
Neural Information Processing Systems (NIPS), Workshop on Discrete Optimization in Machine Learning, Vancouver, Canada, December, 2009

Record created 2010-09-13, last modified 2018-01-28
