Abstract

Many recent works have shown that if a given signal admits a sufficiently sparse representation in a given dictionary, then this representation is recovered by several standard optimization algorithms, in particular the convex $\ell^1$ minimization approach. Here we investigate the related problem of inferring the dictionary from training data, with an approach where $\ell^1$-minimization is used as a criterion to select a dictionary. We restrict our analysis to basis learning and identify necessary / sufficient / necessary and sufficient conditions on ideal (not necessarily very sparse) coefficients of the training data in an ideal basis to guarantee that the ideal basis is a strict local optimum of the $\ell^1$-minimization criterion among (not necessarily orthogonal) bases of normalized vectors. We illustrate these conditions on deterministic as well as toy random models in dimension two and highlight the main challenges left open by these preliminary theoretical results.
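
The following is a minimal illustrative sketch, not the paper's analysis: for a basis (a square, invertible dictionary $B$ with unit-norm columns), the coefficients of a signal $y$ are simply $B^{-1}y$, so the $\ell^1$ criterion to be minimized over bases is $\sum_n \|B^{-1} y_n\|_1$. The function name `l1_criterion` and the toy two-dimensional data model below are hypothetical choices for illustration.

```python
import numpy as np

def l1_criterion(B, Y):
    """Sum of l1 norms of the coefficients of the columns of Y in basis B.

    B : (d, d) candidate basis with unit-norm columns (assumed invertible).
    Y : (d, N) training data, one signal per column.
    For a basis (square dictionary) the coefficients are exactly B^{-1} Y,
    so the criterion reduces to a single matrix solve.
    """
    coeffs = np.linalg.solve(B, Y)   # coefficients of each signal in basis B
    return np.abs(coeffs).sum()      # l1 cost summed over all training signals

# Toy check in dimension two (hypothetical model): data generated with
# sparse coefficients in an ideal basis B0 should typically score lower
# under B0 than under a slightly perturbed basis with normalized columns.
rng = np.random.default_rng(0)
B0 = np.eye(2)                                                # ideal basis
X = rng.laplace(size=(2, 500)) * (rng.random((2, 500)) < 0.5) # sparse coefficients
Y = B0 @ X

P = B0 + 0.1 * rng.standard_normal((2, 2))   # perturbed candidate basis
P /= np.linalg.norm(P, axis=0)               # re-normalize its columns
print(l1_criterion(B0, Y) <= l1_criterion(P, Y))  # typically True
```

This only evaluates the criterion at two candidate bases; the paper's question is under which conditions on the ideal coefficients the ideal basis is a strict local optimum of this criterion.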