Abstract

From first-order incremental Sigma-Delta converters to controlled-oscillator-based converters, many ADC architectures rely on the continuous-time integration of the input signal. However, the accuracy of such converters cannot be properly estimated without establishing the impact of noise. Indeed, the noise is integrated along with the signal, resulting in a random error added to the measured value. Since drift phenomena may prevent simulations and practical measurements from guaranteeing the long-term reliability of these converters, a theoretical tool is required. This paper presents a method to compute the standard deviation of the noise-generated error in continuous-time integrator-based ADCs, under the assumption that a previous measurement is used to calibrate the system. Besides reflecting a realistic use case, this assumption makes it possible to handle a theoretical issue that would otherwise leave the problem not properly solvable. The theory is developed, the equations are solved for the cases of pure white noise and pure flicker noise, and the implementation issues raised by the resulting formula are addressed.
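As a concrete illustration of the white-noise case, the Python sketch below estimates by Monte Carlo the standard deviation of the error caused by integrating white noise over a window of duration T, when the integrated noise of a previous window of identical duration is subtracted as calibration. All numerical values (the PSD S_w, the window T, the step and trial counts) are hypothetical, and the closed-form reference used for comparison, sqrt(2 S_w T), is the textbook result for the difference of two independent integrated-white-noise windows, not a formula taken from the paper.

```python
import numpy as np

# Monte Carlo sketch (hypothetical parameters): std of the error from
# integrating white noise over a window T, with a previous measurement
# of the same duration subtracted for calibration.
S_w = 1e-12    # two-sided white-noise PSD, V^2/Hz (assumed value)
T = 1e-3       # integration window, s (assumed value)
n_steps = 1000
n_trials = 20000
dt = T / n_steps
rng = np.random.default_rng(0)

# Discrete white noise: each sample has variance S_w / dt so that the
# Riemann-sum integral matches the continuous-time statistics.
noise = rng.normal(0.0, np.sqrt(S_w / dt), size=(n_trials, 2 * n_steps))
integ = noise.cumsum(axis=1) * dt

ref = integ[:, n_steps - 1]                  # calibration (previous) window
meas = integ[:, -1] - integ[:, n_steps - 1]  # current measurement window
error = meas - ref                           # residual error after calibration

print("Monte Carlo std :", error.std())
print("Closed form     :", np.sqrt(2 * S_w * T))  # two independent windows
```

For flicker noise the same differencing step is what keeps the result finite: the 1/f spectrum diverges at DC, but subtracting the previous measurement acts as a high-pass filter, which is consistent with the calibration assumption being needed to make the problem properly solvable.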
