Vocal Tract Length Normalization for Statistical Parametric Speech Synthesis
Vocal tract length normalization (VTLN) has been used successfully in automatic speech recognition to improve performance. The same technique can be employed in statistical parametric speech synthesis for rapid speaker adaptation during synthesis. This paper presents an efficient implementation of VTLN using expectation maximization and addresses the key challenges of applying VTLN to synthesis: Jacobian normalization, high-dimensional features, and truncation of the transformation matrix are among the problems treated, together with appropriate solutions. Detailed evaluations are performed to identify the most suitable way of using VTLN in speech synthesis. Evaluating VTLN in the synthesis framework is itself non-trivial, since the technique does not work equally well for all speakers; speakers have therefore been selected according to different objective and subjective criteria to demonstrate the differences between systems. The best method for implementing VTLN is confirmed to be the use of lower-order features for estimating the warping factors.
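To make the technique concrete, below is a minimal sketch of bilinear (all-pass) frequency warping of a cepstral sequence, the standard linear-transform view of VTLN on cepstral features; the recursion follows the well-known frequency-transform algorithm (as implemented, e.g., by SPTK's `freqt`). The function name and the example coefficients are illustrative, not taken from the paper; the warping factor `alpha` plays the role of the VTLN warp (`alpha = 0` leaves the features unchanged).

```python
def warp_cepstrum(c, out_order, alpha):
    """Bilinear frequency warping of a cepstral sequence.

    Applies the all-pass transform z^-1 -> (z^-1 - alpha) / (1 - alpha * z^-1)
    to the cepstrum c, returning out_order + 1 warped coefficients.
    Because the mapping is linear in c, it is equivalent to multiplying
    c by a (truncated) warping matrix, which is how VTLN appears as a
    linear transformation of the features.
    """
    d = [0.0] * (out_order + 1)  # previous-iteration state
    g = [0.0] * (out_order + 1)  # warped cepstrum being accumulated
    # Process input coefficients from highest order down to c[0].
    for ci in reversed(c):
        d[0] = g[0]
        g[0] = ci + alpha * d[0]
        if out_order >= 1:
            d[1] = g[1]
            g[1] = (1.0 - alpha * alpha) * d[0] + alpha * d[1]
        for m in range(2, out_order + 1):
            d[m] = g[m]
            g[m] = d[m - 1] + alpha * (d[m] - g[m - 1])
    return g

# Illustrative use: warp a toy 3-term cepstrum with a typical warp factor.
c = [1.0, 0.5, 0.25]
warped = warp_cepstrum(c, 2, 0.1)
identity = warp_cepstrum(c, 2, 0.0)  # alpha = 0 reproduces the input
```

Truncating the output order (`out_order`) corresponds to the matrix-truncation issue discussed in the paper: the exact warped cepstrum is infinite-dimensional, and keeping only the leading coefficients truncates the transformation matrix.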
Keywords: expectation-maximization optimization; hidden Markov model (HMM)-based statistical parametric speech synthesis; speaker adaptation; vocal tract length normalization; linear transformation; recognition
Record created on 2013-12-19, modified on 2016-08-09