Abstract

Biometric authentication can be cast as a signal processing and statistical pattern recognition problem. As such, it relies on models of signal representations that can be used to discriminate between classes. One assumption typically made by the practitioner is that the training set used to learn the parameters of the class-conditional likelihood functions is a representative sample of the unseen test data on which the system will be used. If the test data is distorted, this assumption no longer holds, and the Bayes decision rule and maximum-likelihood rules are no longer optimal. In biometrics, distortions of the data come from two main sources: intra-user variability and changes in acquisition conditions. The aim of this thesis is to increase the robustness of biometric verification systems to these sources of variability.

Since the signals under consideration are stochastic rather than deterministic, steady-state signal analysis techniques are not adequate for modelling them. By using probabilistic methods instead, we can obtain models describing, amongst other things, the amount of spread in the random variables, meaning that we can account for the uncertainty in the realisation of the random variables (features) due to intra-user variability. Furthermore, we posit that modelling information reflecting the acquisition conditions (signal quality measures) should help improve the robustness of biometric verification systems to deviations of the data from the training conditions. In this thesis, we use probabilistic approaches at all stages of the biometric authentication processing chain, while taking into account the quality of the signal being modelled. We work in the theoretical framework of Bayesian networks, a family of graphical models offering considerable flexibility, and use them both for single-classifier systems (base classifier and reliability model) and for multiple-classifier systems (classifier combination with and without quality measures).

In the single-classifier part, we propose a Bayesian network topology equivalent to a Gaussian mixture model for signature verification, and show that its experimental results are on par with state-of-the-art signature verification systems. The same model can also be used for speaker verification.

Quality measures are auxiliary information that can be used in both single-classifier and multi-classifier systems. We define the concept of quality measure precisely and describe the different potential types of quality measures. We propose new quality measures for both speech and signature, as well as the concept of a modality-independent quality measure, an additional type of auxiliary information. We show that the effect of signal degradation can differ between impostor and client score distributions, an important effect to take into account when designing quality-based fusion models. We also propose a principled evaluation methodology for quality measures.

We propose the use of reliability models: probabilistic models of single-classifier behaviour that take quality measures into account. They yield an enhanced confidence measure that is, to some degree, robust to changing quality. Experiments show that reliability estimation generally outperforms confidence estimation.
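As a concrete illustration of the single-classifier setting above, the following is a minimal sketch of GMM-based verification scoring. The thesis uses a Bayesian network topology equivalent to a Gaussian mixture model; the sketch shows the standard log-likelihood-ratio score such a model supports. The use of scikit-learn, the function names, and the number of mixture components are illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch of GMM-based verification scoring (hypothetical names;
# scikit-learn and the component count are assumptions, not thesis details).
import numpy as np
from sklearn.mixture import GaussianMixture

def train_models(client_feats, world_feats, n_components=8):
    # Fit a client model and a world (background) model on feature vectors,
    # one row per observation (e.g. per signature time step or speech frame).
    client_gmm = GaussianMixture(n_components=n_components).fit(client_feats)
    world_gmm = GaussianMixture(n_components=n_components).fit(world_feats)
    return client_gmm, world_gmm

def verification_score(test_feats, client_gmm, world_gmm):
    # Average log-likelihood ratio over the test sample's frames; the claimed
    # identity is accepted when the score exceeds a threshold tuned on
    # development data.
    return float(np.mean(client_gmm.score_samples(test_feats)
                         - world_gmm.score_samples(test_feats)))
```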
We formalise different classifier combination algorithms as probabilistic models in the framework of Bayesian networks, for both decision-level and score-level fusion, and propose enhancements to existing models. We also propose a new structure learning algorithm, sparse regression fusion (SRF), designed specifically for classifier combination tasks; SRF obtains good results on three multimodal benchmark databases. Lastly, we offer a theoretical view of probabilistic classifier combination with quality measures, based on an analysis of the independence and conditional independence relationships induced by different model topologies. We also show the importance of the notion of context-specific independence, and draw a parallel between decision tree building and enforcing a weak version of context-specific independence. Three quality-based fusion schemes are proposed: SRF-Q, an adaptation of the SRF algorithm to the use of quality measures; context-specific fusion with quality measures (CSF-Q), a fusion model equivalent to a decision tree but motivated by probabilistic and independence arguments; and rigged majority voting, a flexible scheme that can be used with both reliability models and other meta-classifiers, with clear bounds on the accuracy gains that can be expected. The CSF-Q and SRF-Q algorithms perform better than state-of-the-art combiners that do not use quality measures and, under certain conditions, better than existing state-of-the-art combiners that do.
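To give a rough, hedged picture of the general idea of sparse regression over fusion inputs (base-classifier scores plus quality measures), one might sketch it as below. The estimator choice, function names, and the treatment of quality measures as plain extra inputs are assumptions made for illustration only; the actual SRF and SRF-Q methods, as the abstract states, are structure learning algorithms for Bayesian networks, not a single linear combiner.

```python
# Illustrative sketch of sparse-regression-style, quality-aware score fusion.
# This is an analogy to SRF-Q (L1 sparsity over fusion inputs), not the
# thesis's algorithm; names and the scikit-learn estimator are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_fusion(scores, quality, labels, C=1.0):
    # scores: (n, k) base-classifier scores; quality: (n, q) quality measures;
    # labels: 1 = genuine client access, 0 = impostor access.
    # The L1 penalty drives the weights of uninformative inputs to zero,
    # effectively selecting which scores and quality measures to combine.
    X = np.hstack([scores, quality])
    return LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, labels)

def fused_score(model, scores, quality):
    # Posterior probability of the client class, used as the fused score.
    return model.predict_proba(np.hstack([scores, quality]))[:, 1]
```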
