Abstract

This thesis describes a novel digital background calibration scheme for pipelined ADCs with nonlinear interstage gain. Errors caused by the nonlinear gains are corrected in real time by adaptively post-processing the digital stage outputs. The goal of this digital error correction is to improve the power efficiency of high-precision analog-to-digital conversion by relaxing the linearity and matching constraints on the analog pipeline stages and compensating the resulting distortion through digital post-processing. This approach is motivated by the observation that technology scaling reduces the energy cost of digital signal processing while making high-precision analog signal processing harder because of reduced intrinsic device gain and reduced voltage headroom. In particular, the proposed calibration approach enables the use of power-efficient circuits in noise-limited high-resolution, high-speed converters. Alternative stage circuit topologies that are more power efficient than their traditional counterparts are typically too nonlinear and too sensitive to temperature and bias variations to be employed in the critical stages of such converters without adaptive error correction.

The proposed calibration scheme removes the effects of nonlinear interstage gain, sub-DAC nonlinearity, and mismatch between the reference voltages of different stages. Gain errors and reference voltage mismatch are continuously tracked during normal operation and may therefore be time-varying; sub-DAC nonlinearity is assumed to be constant. A method is proposed to characterize the time-invariant non-ideal sub-DAC characteristics during an initial one-time offline calibration phase. Because the method uses only the existing uncalibrated analog hardware, it can determine only the relative sizes of the DAC error terms; one or two scale factors per sub-DAC remain to be estimated by the adaptation algorithm used to track the time-varying gain parameters. Because each scale factor is constant, it can be excluded from adaptation after its estimate has converged. This offline characterization ensures that the entire characteristic of every sub-DAC can be estimated and that calibration of DAC errors can be permanently turned off after initial convergence. Furthermore, it eliminates degrees of freedom in the error correction function and fixes the gain of the calibrated ADC.

The digital postprocessor linearizes the ADC transfer characteristic by applying an adaptive inverse model of the analog signal path to the digital outputs of the pipeline stages. The model uses piecewise linear (PWL) functions to approximate the inverses of the nonlinear stage gains. Previously reported background calibration methods are limited to low-order polynomial gain models; the PWL model is more general, so the analog signal path can be optimized for power efficiency without any constraint on high-order distortion.

The previously reported split-ADC architecture is used to enable background adaptation of the error correction parameters during normal converter operation without requiring an accurate reference ADC. The converter to be calibrated is split into two nominally identical channels, both processing the same input signal. The average of the two channel outputs is used as the overall output, and their difference is used as an error signal.
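As an informal illustration of these two ingredients, the sketch below shows a per-stage piecewise-linear inverse gain correction and the split-ADC output/error combination. The function names, the segment parameterization, and the identity initialization are assumptions made for this example, not the exact formulation used in the thesis.

```python
import numpy as np

def pwl_inverse(residue, breakpoints, slopes, offsets):
    """Evaluate a piecewise-linear approximation of the inverse stage gain.

    `breakpoints` are the segment edges (ascending); `slopes` and `offsets`
    parameterize one affine piece per segment. Purely illustrative.
    """
    seg = np.clip(np.searchsorted(breakpoints, residue) - 1, 0, len(slopes) - 1)
    return offsets[seg] + slopes[seg] * (residue - breakpoints[seg])

def split_adc_combine(y_a, y_b):
    """Split-ADC combination: the channel average is the converter output,
    the channel difference is the error signal that drives adaptation."""
    return 0.5 * (y_a + y_b), y_a - y_b

# Example: correct one stage's digitized residue in both channels, then
# form the overall output and the adaptation error for a single sample.
bp = np.linspace(-1.0, 1.0, 9)          # 8 PWL segments (hypothetical)
slopes_a, offsets_a = np.ones(8), bp[:-1].copy()   # identity initialization
slopes_b, offsets_b = np.ones(8), bp[:-1].copy()
ra, rb = 0.37, 0.35                     # digitized residues of the two channels
ya = pwl_inverse(ra, bp, slopes_a, offsets_a)
yb = pwl_inverse(rb, bp, slopes_b, offsets_b)
output, error = split_adc_combine(ya, yb)
```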
The mean-square value of this error signal serves as the performance function that is minimized by the adaptation algorithm. Because two non-ideal ADCs act as reference channels for each other, precautions are needed to prevent the adaptation algorithm from simply equalizing the transfer characteristics of the two ADCs. The effect of the flexible gain model on these parasitic solutions is analyzed, and a previously reported method for eliminating parasitic solutions in the case of linear gains is modified to also work with arbitrary nonlinear gains.

A simplified version of the normalized least-mean-squares (NLMS) algorithm is used for parameter adaptation. The normalization assumes that the performance function is quadratic in the parameters, which is nearly the case because the channel output difference is almost linear in the error correction parameters. Because a low-noise reference signal is used, the LMS loop does not need to filter out noise; the normalization, in conjunction with the low-noise reference signal, significantly mitigates the trade-off between convergence speed and steady-state error.

Heuristic strategies to control the NLMS algorithm are proposed to address identified weaknesses of the basic adaptation algorithm. The main benefits of the heuristic control are faster initial convergence and faster recovery from transient disturbances. Fast initial convergence is achieved by gradually increasing the granularity of the PWL gain models; fast recovery after abrupt parameter changes is achieved by selectively reducing the search space for certain samples.

A possible architecture for a hardware implementation of the postprocessor is analyzed to demonstrate the practicability of the proposed digital error correction scheme and to propose detailed architectures for critical blocks. The analysis concludes that, using the proposed architecture, the hardware implementation poses no specific difficulty in terms of area, power, or design complexity.

In summary, a novel approach to adaptive nonlinear digital error correction for pipelined ADCs is proposed. The error correction models amplifier nonlinearity as a general piecewise linear function for maximum flexibility, and the algorithm can be implemented using only simple arithmetic and a small amount of memory.
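A minimal sketch of such a normalized update step is given below, assuming the channel-output difference is approximately linear in the correction parameters so that its gradient can serve directly as the regressor. The function name, step size, and placeholder signals are illustrative assumptions, not the simplified NLMS variant defined in the thesis.

```python
import numpy as np

def nlms_step(params, regressor, error, mu=0.25, eps=1e-12):
    """One NLMS-style update of the error-correction parameters.

    `error` is the split-ADC channel-output difference for the current
    sample and `regressor` is its (approximately constant) gradient with
    respect to `params`; normalizing by the squared regressor norm makes
    the effective step size largely independent of signal level.
    """
    return params - (mu * error / (regressor @ regressor + eps)) * regressor

# Hypothetical usage inside the background-calibration loop: for each
# converted sample, compute the channel difference `e` and the gradient
# `g` of that difference with respect to the PWL parameters, then update.
params = np.zeros(16)                               # placeholder parameter vector
g = np.random.default_rng(0).standard_normal(16)    # placeholder gradient
e = float(g @ params) + 0.1                         # placeholder error sample
params = nlms_step(params, g, e)
```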
