Statistical lip modelling for visual speech recognition
We describe a speechreading (lipreading) system based purely on visual features extracted from grey-level image sequences of the speaker's lips. Active shape models are used to track the lip contours, and visual speech information is extracted from the shape of the contours. The distribution and temporal dependencies of the shape features are modelled by continuous-density hidden Markov models. Experiments are reported for speaker-independent recognition tests on isolated digits. Analysis of the individual feature components suggests that speech-relevant information is embedded in a low-dimensional space and is fairly robust to inter- and intra-speaker variability.
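The paper does not include code, but the core scoring step it relies on can be illustrated: a continuous-density HMM with Gaussian emissions evaluates the log-likelihood of a sequence of shape-feature vectors via the forward algorithm, and a word (here, a digit) is recognised by picking the model with the highest score. The sketch below is purely illustrative; all function names, the diagonal-covariance assumption, and the toy parameters are ours, not the authors'.

```python
import numpy as np

def gaussian_logpdf(x, mean, var):
    # Log density of a diagonal-covariance Gaussian (illustrative choice;
    # the paper's exact covariance structure is not assumed here).
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def forward_log_likelihood(obs, log_pi, log_A, means, vars_):
    """Log-likelihood of an observation sequence under a Gaussian-emission
    HMM, computed with the forward algorithm in log space."""
    T, n_states = len(obs), len(log_pi)
    # Emission log-probabilities b_j(o_t) for every frame and state.
    log_b = np.array([[gaussian_logpdf(obs[t], means[j], vars_[j])
                       for j in range(n_states)] for t in range(T)])
    alpha = log_pi + log_b[0]                      # initialisation
    for t in range(1, T):                          # recursion
        alpha = np.array([
            np.logaddexp.reduce(alpha + log_A[:, j]) + log_b[t, j]
            for j in range(n_states)
        ])
    return np.logaddexp.reduce(alpha)              # termination

# Toy example: a 2-state left-to-right-ish model over 1-D features.
log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.7, 0.3],
                [0.3, 0.7]])
means = np.array([[0.0], [5.0]])
vars_ = np.array([[1.0], [1.0]])

seq = np.array([[0.1], [0.2], [4.9]])
print(forward_log_likelihood(seq, log_pi, log_A, means, vars_))
```

For isolated-digit recognition one such model would be trained per digit, and a test sequence assigned to the digit whose model yields the largest forward log-likelihood.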