HMM-based Approaches to Model Multichannel Information in Sign Language Inspired from Articulatory Features-based Speech Processing

Sign language conveys information through multiple channels, such as hand shape, hand movement, and mouthing. Modeling this multichannel information is a highly challenging problem. In this paper, we elucidate the link between spoken language and sign language in terms of production and perception phenomena. Through this link, we show that hidden Markov model-based approaches developed to model "articulatory" features for spoken language processing can be exploited to model the multichannel information inherent in sign language for sign language processing.
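To illustrate the idea, the standard recipe borrowed from articulatory-feature-based speech recognition is a multi-stream HMM: each channel (hand shape, hand movement, mouthing) is treated as a separate observation stream, and per-state emission scores are combined log-linearly with stream weights. The sketch below is a minimal illustration under assumed per-state diagonal Gaussian emissions; all function names and parameters are illustrative, not the paper's implementation.

```python
# A minimal multi-stream HMM sketch (illustrative, not the paper's exact
# model): each sign language channel contributes its own observation stream,
# and per-state emission log-scores are combined log-linearly with stream
# weights, as in articulatory-feature-based speech recognition.

import numpy as np

def log_gaussian(x, mean, var):
    """Log density of a diagonal-covariance Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def multistream_log_likelihood(streams, means, variances, weights,
                               log_trans, log_init):
    """Forward-algorithm log-likelihood of a multi-stream HMM.

    streams   : list of (T, d_c) arrays, one per channel
    means     : list of (S, d_c) arrays, per-state Gaussian means per channel
    variances : list of (S, d_c) arrays, per-state diagonal variances
    weights   : (C,) non-negative stream weights (typically summing to 1)
    log_trans : (S, S) log transition matrix
    log_init  : (S,) log initial state distribution
    """
    T = streams[0].shape[0]
    S = log_init.shape[0]
    # Per-state emission log-scores: weighted sum over channels.
    log_emit = np.zeros((T, S))
    for c, obs in enumerate(streams):
        for t in range(T):
            for s in range(S):
                log_emit[t, s] += weights[c] * log_gaussian(
                    obs[t], means[c][s], variances[c][s])
    # Standard forward recursion in the log domain.
    alpha = log_init + log_emit[0]
    for t in range(1, T):
        alpha = log_emit[t] + np.array([
            np.logaddexp.reduce(alpha + log_trans[:, s]) for s in range(S)])
    return np.logaddexp.reduce(alpha)
```

The stream weights control how much each channel contributes to the combined score; in articulatory-feature systems they are typically tuned on held-out data.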


Published in:
2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2817-2821
Presented at:
44th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, May 12-17, 2019
Year:
2019
Publisher:
IEEE, New York