Unsupervised Rhythm and Voice Conversion to Improve ASR on Dysarthric Speech
Automatic speech recognition (ASR) systems struggle with dysarthric speech due to high inter-speaker variability and slow speaking rates. To address this, we explore dysarthric-to-healthy speech conversion for improved ASR performance. Our approach extends the Rhythm and Voice (RnV) conversion framework by introducing a syllable-based rhythm modeling method suited for dysarthric speech. We assess its impact on ASR by training LF-MMI models and fine-tuning Whisper on converted speech. Experiments on the Torgo corpus reveal that LF-MMI achieves significant word error rate reductions, especially for more severe cases of dysarthria, while fine-tuning Whisper on converted data has minimal effect on its performance. These results highlight the potential of unsupervised rhythm and voice conversion for dysarthric ASR. Code available at: https://github.com/idiap/RnV.
EPFL
Institut Dalle Molle d'Intelligence Artificielle Perceptive (Idiap)
2025, pp. 2760–2764
| Event name | Event acronym | Event place | Event date |
| --- | --- | --- | --- |
| | | Rotterdam, Netherlands | 2025-08-17 – 2025-08-21 |