Statistical lip modelling for visual speech recognition
Luettin, Juergen; Thacker, Neil A.; Beet, Steve W.
1996 · Proceedings of the 8th European Signal Processing Conference (Eusipco'96)
We describe a speechreading (lipreading) system based purely on visual features extracted from grey-level image sequences of the speaker's lips. Active shape models are used to track the lip contours, while visual speech information is extracted from the shape of the contours. The distribution and temporal dependencies of the shape features are modelled by continuous-density hidden Markov models. Experiments are reported for speaker-independent recognition tests of isolated digits. The analysis of individual feature components suggests that the speech-relevant information is embedded in a low-dimensional space and is fairly robust to inter- and intra-speaker variability.
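To illustrate the kind of pipeline the abstract describes, the sketch below is a minimal, hypothetical example and not the authors' implementation: scikit-learn's PCA stands in for the statistical shape model underlying the active shape models, hmmlearn's GaussianHMM stands in for the continuous-density hidden Markov models, and synthetic placeholder vectors stand in for lip contours tracked from grey-level image sequences.

# Hypothetical sketch of a shape-feature + HMM digit recogniser.
# Not the authors' code; data and dimensions are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn import hmm

rng = np.random.default_rng(0)

# Assume each video frame yields 2-D coordinates of N landmark points on the
# lip contour, flattened to (x1, y1, ..., xN, yN). Here the frames are fabricated;
# a real system would obtain them from active shape model tracking.
n_landmarks = 20
frames_per_utterance = 30
utterances_per_digit = 10

def synthetic_utterance():
    # Placeholder lip-contour trajectory for one spoken digit.
    return rng.normal(size=(frames_per_utterance, 2 * n_landmarks))

# 1) Statistical shape model: PCA over all training frames captures the main
#    modes of lip-shape variation in a low-dimensional space.
train_frames = np.vstack([synthetic_utterance() for _ in range(50)])
shape_model = PCA(n_components=8).fit(train_frames)

# 2) One continuous-density (Gaussian) HMM per digit, trained on sequences of
#    shape parameters.
models = {}
for digit in range(10):
    seqs = [shape_model.transform(synthetic_utterance())
            for _ in range(utterances_per_digit)]
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    m = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=20)
    m.fit(X, lengths)
    models[digit] = m

# 3) Classify a test utterance by maximum HMM log-likelihood.
test = shape_model.transform(synthetic_utterance())
scores = {d: m.score(test) for d, m in models.items()}
print("recognised digit:", max(scores, key=scores.get))

In this sketch the PCA coefficients play the role of the low-dimensional shape features, and recognition reduces to picking the digit model with the highest log-likelihood for the observed feature sequence.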
Type
conference paper
Author(s)
Luettin, Juergen; Thacker, Neil A.; Beet, Steve W.
Date Issued
1996
Published in
Proceedings of the 8th European Signal Processing Conference (Eusipco'96)
Volume
I
Start page
137
End page
140
Written at
EPFL
Available on Infoscience
March 10, 2006