Abstract

We address the problem of keyword spotting in continuous speech streams when training and testing conditions may differ. We propose a keyword spotting algorithm based on sparse representation of speech signals in a time-frequency feature space. The training speech elements are jointly represented in a common subspace built on simple basis functions. The subspace is trained to capture the common time-frequency structures across different occurrences of the keywords to be spotted. The keyword spotting algorithm then employs a sliding-window mechanism on speech streams: it computes the contribution of successive speech segments in the subspace of interest and evaluates their similarity with the training data. Experimental results on the TIMIT database show the effectiveness and the noise resilience of the low-complexity spotting algorithm.
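
The abstract describes the overall pipeline (train a subspace on keyword occurrences, then slide a window over the test stream and score each segment against that subspace) but not its exact formulation. The Python sketch below is only a generic illustration of such a pipeline under simplifying assumptions: a plain truncated SVD stands in for the paper's subspace training, keyword occurrences are assumed to be fixed-size spectrogram patches, and the detection score is the fraction of segment energy captured by orthogonal projection onto the subspace. All function names, parameters, and the synthetic data are hypothetical and not taken from the paper.

    import numpy as np

    def learn_subspace(keyword_examples, rank=8):
        # Learn a low-rank subspace (truncated SVD) from vectorized
        # time-frequency patches of keyword occurrences.
        # keyword_examples: list of 2-D arrays (freq_bins x frames),
        # all the same shape, one per keyword occurrence.
        X = np.stack([ex.reshape(-1) for ex in keyword_examples], axis=1)
        U, _, _ = np.linalg.svd(X, full_matrices=False)
        return U[:, :rank]                      # orthonormal basis columns

    def spot_keyword(spectrogram, basis, win_frames, hop=1):
        # Slide a window over the test spectrogram, project each segment
        # onto the learned subspace, and score it by the fraction of its
        # energy captured by the projection (close to 1 when the segment
        # lies in the keyword subspace).
        n_freq, n_frames = spectrogram.shape
        scores = []
        for start in range(0, n_frames - win_frames + 1, hop):
            seg = spectrogram[:, start:start + win_frames].reshape(-1)
            proj = basis @ (basis.T @ seg)      # orthogonal projection
            energy = np.dot(seg, seg) + 1e-12
            scores.append(np.dot(proj, proj) / energy)
        return np.array(scores)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n_freq, win_frames = 40, 25
        # Synthetic "keyword" template plus noisy occurrences (placeholder data).
        template = rng.random((n_freq, win_frames))
        train = [template + 0.05 * rng.random((n_freq, win_frames)) for _ in range(10)]
        basis = learn_subspace(train, rank=4)

        # Test stream: low-level noise with the keyword embedded at frame 60.
        stream = 0.1 * rng.random((n_freq, 120))
        stream[:, 60:60 + win_frames] += template
        scores = spot_keyword(stream, basis, win_frames)
        print("highest score at frame", int(np.argmax(scores)))

A simple threshold on the per-segment score then yields detections; the energy-fraction score is one plausible choice, and the actual similarity measure used in the paper may differ.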
