Abstract

Sign language conveys information through multiple channels, such as hand shape, hand movement, and mouthing. Modeling this multi-channel information is a highly challenging problem. In this paper, we elucidate the link between spoken language and sign language in terms of production and perception phenomena. Through this link, we show that hidden Markov model-based approaches developed to model "articulatory" features for spoken language processing can be exploited to model the multi-channel information inherent in sign language for sign language processing.
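To make the multi-channel modeling idea concrete, the following is a minimal sketch, not the paper's method: it trains one Gaussian HMM per channel for each sign (using the hmmlearn library) and classifies a test sequence by a weighted sum of per-channel log-likelihoods. The channel names, feature dimensions, stream weights, and synthetic data are all illustrative assumptions.

```python
# Minimal multi-stream HMM sketch (illustrative, not the paper's implementation).
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

# Hypothetical channels with assumed per-frame feature dimensions.
CHANNELS = {"hand_shape": 4, "hand_movement": 3, "mouthing": 2}
SIGNS = ["HELLO", "THANKS"]

def synthetic_sequence(dim, offset, n_frames=40):
    """Stand-in for real per-channel feature extraction."""
    return rng.normal(loc=offset, scale=1.0, size=(n_frames, dim))

# Train one HMM per (sign, channel) pair: models[sign][channel].
models = {}
for s_idx, sign in enumerate(SIGNS):
    models[sign] = {}
    for channel, dim in CHANNELS.items():
        # Concatenate several training sequences; hmmlearn takes their lengths.
        seqs = [synthetic_sequence(dim, offset=s_idx) for _ in range(5)]
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        hmm = GaussianHMM(n_components=3, covariance_type="diag",
                          n_iter=20, random_state=0)
        hmm.fit(X, lengths)
        models[sign][channel] = hmm

def classify(obs_by_channel, weights=None):
    """Score each sign by a weighted sum of per-channel log-likelihoods."""
    weights = weights or {c: 1.0 for c in CHANNELS}
    scores = {
        sign: sum(weights[c] * channel_models[c].score(obs_by_channel[c])
                  for c in CHANNELS)
        for sign, channel_models in models.items()
    }
    return max(scores, key=scores.get)

# Test data drawn from the distribution used for the first sign.
test_obs = {c: synthetic_sequence(d, offset=0) for c, d in CHANNELS.items()}
print(classify(test_obs))  # expected: "HELLO"
```

Combining stream log-likelihoods with per-stream weights mirrors the multi-stream formulation commonly used in articulatory-feature-based speech recognition, which is the analogy the abstract draws on; the actual system in the paper may synchronize and weight the channels differently.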
