Abstract

The theory of Compressed Sensing (CS) is based on reconstructing sparse signals from random linear measurements. Since the measurement of continuous signals by digital devices always involves some form of quantization, in practice devices based on CS encoding must be able to accommodate the distortions in the linear measurements created by quantization. In this paper we study the problem of recovering sparse or compressible signals from uniformly quantized measurements. We present a new class of convex optimization programs, or decoders, coined Basis Pursuit DeQuantizer of moment $p$ (BPDQ$_p$), that model the quantization distortion more faithfully than the commonly used Basis Pursuit DeNoise (BPDN) program. Our decoders proceed by minimizing the sparsity of the signal to be reconstructed subject to a particular data-fidelity constraint imposing that the difference between the original and the reproduced measurements has bounded $\ell_p$ norm, for $2 \leq p \leq \infty$. We show that, in an oversampled situation, i.e. when the ratio between the number of measurements and the sparsity of the signal becomes large, the performance of the BPDQ$_p$ decoders is significantly better than that of BPDN. Indeed, in this case the reconstruction error due to quantization is divided by $\sqrt{p+1}$. The condition guaranteeing this reduction relies on a modified Restricted Isometry Property (RIP$_p$) of the sensing matrix bounding the projections of sparse signals in the $\ell_p$ norm. Surprisingly, Gaussian random matrices also satisfy the RIP$_p$ with high probability. To demonstrate the theoretical power of BPDQ$_p$, we report numerical simulations on signal and image reconstruction problems.
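
For concreteness, the BPDQ$_p$ program described above can be sketched as follows (notation assumed here: $\Phi$ denotes the sensing matrix, $y$ the quantized measurements, $\epsilon_p$ the fidelity radius, and sparsity is promoted through the $\ell_1$ norm as in Basis Pursuit):

$$ \Delta_p(y, \epsilon_p) \;=\; \arg\min_{u} \|u\|_1 \quad \text{s.t.} \quad \|y - \Phi u\|_p \leq \epsilon_p, \qquad 2 \leq p \leq \infty. $$

Setting $p = 2$ recovers the standard BPDN program, while larger values of $p$ model the bounded distortion introduced by uniform quantization more faithfully.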
