Abstract

We present a theoretical analysis and comparison of the effect of $\ell_1$ versus $\ell_2$ regularization for the resolution of ill-posed linear inverse and/or compressed-sensing problems. Our formulation covers the most general setting where the solution is specified as the minimizer of a convex cost functional. We derive a series of representer theorems that give the generic form of the solution depending on the type of regularization. We start with the analysis of the problem in finite dimensions and then extend our results to the infinite-dimensional spaces $\ell_2(\mathbb{Z})$ and $\ell_1(\mathbb{Z})$. We also consider the use of linear transformations in the form of dictionaries or regularization operators. In particular, we show that the $\ell_2$ solution is forced to live in a predefined subspace that is intrinsically smooth and tied to the measurement operator. The $\ell_1$ solution, on the other hand, is formed by adaptively selecting a subset of atoms in a dictionary that is specified by the regularization operator. Besides the proof that $\ell_1$ solutions are intrinsically sparse, the main outcome of our investigation is that $\ell_1$ regularization is much more favorable for injecting prior knowledge: it results in a functional form that is independent of the system matrix, whereas this is not so in the $\ell_2$ scenario.
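
As a concrete illustration, the two settings being contrasted can be written as generic regularized problems with a quadratic data-fidelity term, which is a representative special case of the convex cost functional mentioned above. The notation here ($y$ for the measurements, $H$ for the system matrix, $L$ for the regularization operator, $\lambda > 0$ for the regularization weight) is assumed for the sake of illustration and is not fixed by the abstract itself:

$$
\min_{x} \, \|y - Hx\|_2^2 + \lambda \|Lx\|_2^2 \quad (\ell_2 \text{ regularization}),
\qquad
\min_{x} \, \|y - Hx\|_2^2 + \lambda \|Lx\|_1 \quad (\ell_1 \text{ regularization}).
$$

In this illustrative form, the representer-theorem dichotomy described above corresponds to the $\ell_2$ minimizer being confined to a subspace determined by $H$ and $L$, while the $\ell_1$ minimizer is a sparse combination of atoms associated with $L$.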
