Abstract

We adopt an innovation-driven framework and investigate the sparse/compressible distributions obtained by linearly measuring or expanding continuous-domain stochastic models. Starting from first principles, we show that all such distributions are necessarily infinitely divisible. This property is satisfied by many distributions used in statistical learning, such as the Gaussian and Laplace laws, as well as a wide range of fat-tailed distributions, including Student's t and α-stable laws. However, it excludes some popular distributions used in compressed sensing, such as the Bernoulli-Gaussian distribution and distributions that decay like $\exp(-O(|x|^p))$ for $1 < p < 2$. We further explore the implications of infinite divisibility on these distributions and conclude that tail decay and unimodality are preserved by all linear functionals of the same continuous-domain process. We explain how these results help in identifying suitable variational techniques for statistically solving inverse problems, such as denoising.
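As background for the central property (a standard definition, not part of the abstract itself; the notation $\varphi_X$ and the Laplace example are ours), the following LaTeX sketch states infinite divisibility via characteristic functions and works one example:

% Infinite divisibility (standard definition; notation ours, not from the paper):
% X is infinitely divisible iff, for every n, there exist i.i.d. variables
% X_{n,1}, ..., X_{n,n} whose sum equals X in distribution; equivalently, the
% characteristic function phi_X(omega) = E[exp(i omega X)] admits, for every n,
% an n-th root that is itself a valid characteristic function:
\[
  \varphi_X(\omega) \;=\; \bigl(\varphi_{X_n}(\omega)\bigr)^{n},
  \qquad n = 1, 2, 3, \dots
\]
% Worked example: the Laplace law with density (1/2) e^{-|x|} has
%   phi_X(omega) = 1 / (1 + omega^2),
% and (1 + omega^2)^{-1/n} is again a valid characteristic function (that of a
% symmetric variance-gamma law), so the Laplace distribution is infinitely
% divisible. By contrast, the Bernoulli-Gaussian mixture admits no such n-th
% root decomposition, which is why the abstract excludes it.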
