HMM-based Approaches to Model Multichannel Information in Sign Language inspired from Articulatory Features-based Speech Processing
Sign language conveys information through multiple channels, such as hand shape, hand movement, and mouthing. Modeling this multichannel information is a highly challenging problem. In this paper, we elucidate the link between spoken language and sign language in terms of production and perception phenomena. Through this link, we show that hidden Markov model-based approaches developed to model "articulatory" features for spoken language processing can be exploited to model the multichannel information inherent in sign language for sign language processing.
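To illustrate the kind of modeling the abstract refers to, the following is a minimal sketch of a multi-stream HMM, a common way to handle multichannel observations with HMMs: each channel (e.g., hand shape, mouthing) has its own emission model, and the per-channel likelihoods are combined as a weighted product inside the forward algorithm. All numbers, stream weights, and channel labels here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative multi-stream HMM with discrete per-channel observations.
n_states = 2
A = np.array([[0.7, 0.3],        # state-transition probabilities
              [0.4, 0.6]])
pi = np.array([0.6, 0.4])        # initial state distribution

# One emission matrix per channel: B[c][state, symbol].
B = [
    np.array([[0.9, 0.1], [0.2, 0.8]]),  # channel 0 (e.g., hand shape)
    np.array([[0.6, 0.4], [0.3, 0.7]]),  # channel 1 (e.g., mouthing)
]
weights = [0.5, 0.5]  # illustrative stream weights (sum to 1)

def stream_emission(state, obs):
    """Weighted product of the per-channel emission probabilities."""
    p = 1.0
    for c, symbol in enumerate(obs):
        p *= B[c][state, symbol] ** weights[c]
    return p

def forward_likelihood(observations):
    """Forward algorithm over a sequence of multichannel observations."""
    alpha = pi * np.array([stream_emission(s, observations[0])
                           for s in range(n_states)])
    for obs in observations[1:]:
        emit = np.array([stream_emission(s, obs) for s in range(n_states)])
        alpha = (alpha @ A) * emit
    return alpha.sum()

# Each observation is a tuple of per-channel symbols.
seq = [(0, 0), (0, 1), (1, 1)]
print(forward_likelihood(seq))
```

The weighted-product combination lets each channel contribute to the state likelihood independently, which is the basic mechanism articulatory-feature HMM systems use to fuse parallel information streams.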
WOS:000482554003010
Year: 2019
Place: New York
Pages: 2817-2821
Event name | Event place | Event date
 | Brighton, ENGLAND | May 12-17, 2019