Abstract

The accuracy of simple analog-to-digital conversion depends on the resolution of the discretization in both amplitude and time. For implementation convenience, high conversion accuracy is usually attained by refining the discretization in time, that is, by oversampling. It is commonly believed that oversampling adversely affects the rate-distortion properties of the conversion, since the bit rate B increases linearly with the oversampling ratio, so that the error decays only as O(1/B). We demonstrate that the information obtained in the process of oversampled analog-to-digital conversion can easily be encoded in a manner that requires only a logarithmic increase of the bit rate with the redundancy, achieving an error decay that is exponential in the bit rate.
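The contrast drawn above can be checked numerically. The following Python sketch is illustrative and not from the paper; it assumes NumPy, a random bandlimited test signal, a uniform mid-rise quantizer, and ideal lowpass reconstruction, with all names and parameters chosen for the example. It quantizes samples of one fixed signal taken at increasing multiples of the Nyquist rate while the amplitude step stays fixed, exhibiting the in-band error shrinking only like 1/R while the raw bit rate grows linearly in the oversampling ratio R, i.e., the O(1/B) behavior the abstract attributes to naive encoding of the oversampled bits.

    import numpy as np

    def lowpass(x, band):
        # Ideal lowpass: zero all frequency content at or above bin `band`.
        spec = np.fft.rfft(x)
        spec[band:] = 0.0
        return np.fft.irfft(spec, len(x))

    def quantize(x, step):
        # Uniform mid-rise quantizer with step size `step` (error <= step / 2).
        return step * (np.floor(x / step) + 0.5)

    rng = np.random.default_rng(0)
    band = 32                                # signal occupies frequency bins 1..band-1
    coef = rng.standard_normal(band - 1) + 1j * rng.standard_normal(band - 1)
    coef /= 2 * np.abs(coef).sum()           # bound the signal amplitude by 1
    spec = np.zeros(band, dtype=complex)
    spec[1:] = coef

    step = 2.0 ** -4                         # fixed amplitude resolution (~4-bit quantizer)
    bits_per_sample = 4
    for ratio in (1, 4, 16, 64, 256):
        m = 2 * band * ratio                 # sampling at `ratio` times the Nyquist rate
        x = np.fft.irfft(spec, m) * m        # samples of the same analog signal for every m
        mse = np.mean((lowpass(quantize(x, step), band) - x) ** 2)
        rate = bits_per_sample * ratio       # raw bit rate per Nyquist interval: linear in R
        print(f"R={ratio:4d}  B={rate:5d}  in-band MSE={mse:.3e}")

Lowpass reconstruction removes only the out-of-band portion of the (roughly white) quantization noise, so the in-band MSE falls in proportion to 1/R, hence O(1/B) once B grows linearly with R. The abstract's point is that these linearly many raw bits are highly redundant: the same information admits an encoding whose bit rate grows only logarithmically with the redundancy, recovering exponential error decay in B.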
