Speaker verification is a biometric identity verification technique whose performance can be severely degraded by noise. Using a coherent notation, we reformulate and review several methods that have been proposed to quantify the uncertainty in verification results, some with a view to coping with the effects of mismatched training and testing environments. We also include a recently proposed method which is firmly rooted in a probabilistic approach and interpretation, and which explicitly measures signal quality before assigning a reliability value to the speaker verification classifier's decision. We evaluate the performance of the confidence and reliability measures on a noisy 251-user database, showing that taking signal-domain quality into account leads to more accurate prediction of classifier errors. Finally, we discuss possible strategies for using these measures in a speaker verification system, balancing acquisition duration against verification error rate.
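To make the idea of a quality-conditioned reliability value concrete, the following is a minimal sketch, not the paper's actual probabilistic formulation: it assumes a hypothetical linear mapping `quality_from_snr` from an estimated signal-to-noise ratio to a [0, 1] quality value, and a hypothetical logistic mapping `reliability` whose sharpness parameter `alpha` is an invented illustration, in which low signal quality pulls the reliability of any verification score toward the uninformed value 0.5.

```python
import math

def quality_from_snr(snr_db, lo=0.0, hi=30.0):
    """Map an estimated SNR (dB) to a [0, 1] quality value.

    Hypothetical linear clamp: SNR at or below `lo` gives quality 0,
    at or above `hi` gives quality 1.
    """
    return min(1.0, max(0.0, (snr_db - lo) / (hi - lo)))

def reliability(score, quality, alpha=2.0):
    """Assign a reliability value to a classifier decision.

    Hypothetical logistic mapping: confidence in the accept/reject
    decision grows with the score's distance from the threshold
    (taken as 0 here), scaled by the signal quality in [0, 1].
    """
    return 1.0 / (1.0 + math.exp(-alpha * quality * abs(score)))
```

Under this sketch, a clean utterance (quality near 1) with a score far from the threshold yields reliability near 1, while the same score obtained from a noisy utterance (quality near 0) yields reliability near 0.5, flagging the decision as untrustworthy.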