Privacy-Sensitive Audio Features for Speech/Nonspeech Detection
The goal of this paper is to investigate features for speech/nonspeech detection (SND) that carry "minimal" linguistic information from the speech signal. Toward this end, we present a comprehensive study of privacy-sensitive features for SND in multiparty conversations. Our study investigates three different approaches to privacy-sensitive features, based on: (a) simple, instantaneous feature extraction methods; (b) methods based on excitation source information; and (c) feature obfuscation methods, such as local (within 130 ms) temporal averaging and randomization, applied to excitation source information. To evaluate these approaches for SND, we use nearly 450 hours of multiparty conversational meeting data. On this dataset, we evaluate these features and benchmark them against state-of-the-art spectral-shape-based features such as Mel-Frequency Perceptual Linear Prediction (MF-PLP). Fusion strategies combining excitation source information with simple features achieve state-of-the-art performance in both close-talking and far-field microphone scenarios. As one way to quantify and evaluate the notion of privacy, we conduct Automatic Speech Recognition (ASR) studies on TIMIT. While excitation source features yield phoneme recognition accuracies between those of the simple features and the MF-PLP features, obfuscation methods applied to the excitation features yield low phoneme accuracies while retaining state-of-the-art SND performance.
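The obfuscation idea described above (local temporal averaging and randomization over roughly 130 ms) can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: it assumes frame-level features at a 10 ms frame rate, so a 13-frame window spans about 130 ms; the function names and the non-overlapping window layout are assumptions for illustration.

```python
import numpy as np

WINDOW = 13  # assumed: 13 frames x 10 ms/frame ~ 130 ms, matching the abstract's window


def local_average(features, window=WINDOW):
    """Replace each local block of frames with its mean, smearing out
    the fine temporal detail that carries linguistic content."""
    out = features.copy()
    for start in range(0, len(features), window):
        block = features[start:start + window]
        out[start:start + window] = block.mean(axis=0)
    return out


def local_randomize(features, window=WINDOW, seed=None):
    """Shuffle frame order within each local block, destroying phonetic
    sequencing while preserving window-level feature statistics."""
    rng = np.random.default_rng(seed)
    out = features.copy()
    for start in range(0, len(features), window):
        block = out[start:start + window]
        out[start:start + window] = block[rng.permutation(len(block))]
    return out
```

Both transforms keep the information relevant to detecting whether someone is speaking (slowly varying, window-level energy and excitation statistics) while degrading what an ASR system could recover, which is the trade-off the abstract reports.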
Record created on 2011-07-06, modified on 2016-08-09