Lower and upper bounds for approximation of the Kullback-Leibler divergence between Gaussian Mixture Models

Many speech technology systems rely on Gaussian Mixture Models (GMMs). The need to compare two GMMs arises in applications such as speaker verification, model selection, or parameter estimation. For this purpose, the Kullback-Leibler (KL) divergence is often used. However, since there is no closed-form expression to compute it, it can only be approximated. We propose lower and upper bounds for the KL divergence, which lead to a new approximation and to interesting insights into previously proposed approximations. An application to the comparison of speaker models also shows how such approximations can be used to validate assumptions on the models.
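Since the KL divergence between two GMMs has no closed form, the usual baseline it is measured against is a Monte Carlo estimate: sample from the first mixture and average the log-density ratio. The sketch below illustrates that baseline only, not the bounds proposed in the paper; all function names (gmm_logpdf, gmm_sample, kl_monte_carlo) and the toy mixtures are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_logpdf(x, weights, means, covs):
    """Log-density of a GMM at points x of shape (n, d)."""
    # Weighted per-component log densities, combined with log-sum-exp.
    comp = np.array([
        np.log(w) + multivariate_normal.logpdf(x, mean=m, cov=c)
        for w, m, c in zip(weights, means, covs)
    ])  # shape (K, n)
    return np.logaddexp.reduce(comp, axis=0)

def gmm_sample(n, weights, means, covs, rng):
    """Draw n samples from a GMM by first picking a component, then sampling it."""
    ks = rng.choice(len(weights), size=n, p=weights)
    out = np.empty((n, len(means[0])))
    for k in range(len(weights)):
        idx = ks == k
        out[idx] = rng.multivariate_normal(means[k], covs[k], size=idx.sum())
    return out

def kl_monte_carlo(f, g, n=100_000, seed=0):
    """Monte Carlo estimate of KL(f || g), with f and g given as
    (weights, means, covs) tuples describing GMMs."""
    rng = np.random.default_rng(seed)
    x = gmm_sample(n, *f, rng)
    return np.mean(gmm_logpdf(x, *f) - gmm_logpdf(x, *g))

# Toy example: two 2-component GMMs in one dimension (1x1 covariances).
f = ([0.4, 0.6], [np.array([0.0]), np.array([3.0])], [np.eye(1), np.eye(1)])
g = ([0.5, 0.5], [np.array([0.5]), np.array([2.5])], [np.eye(1), 1.5 * np.eye(1)])
print(kl_monte_carlo(f, g))
```

The estimator is unbiased but can require many samples to be reliable, which is why cheaper deterministic approximations, and bounds that sandwich the true value as in this paper, are of practical interest.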


Published in:
2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4833-4836
Presented at:
IEEE International Conference on Acoustics, Speech and Signal Processing, Kyoto, Japan, March 25-30, 2012
Year:
2012
Publisher:
New York, IEEE
ISBN:
978-1-4673-0046-9
