Abstract

Sign language technology, unlike spoken language technology, is an emerging area of research. Sign language technologies can help bridge the gap between the Deaf community and the hearing community. One such computer-aided technology is sign language learning technology. Building such a technology requires sign language methods that can assess a learner's sign production in a linguistically valid manner; such methods have yet to emerge. This thesis is a step in that direction: we aim to develop an "explainable" sign language assessment framework. Developing such a framework raises several fundamental open research questions: (a) how to effectively model the hand movement channel; (b) how to model the multiple channels inherent in sign language; and (c) how to assess sign language at different linguistic levels. The present thesis addresses these open research questions by: (a) developing a hidden Markov model (HMM) based approach that, given only pairwise comparisons between signs, derives hand movement subunits that are shareable across sign languages and domains; (b) developing phonology-based approaches, inspired by the modeling of articulatory features in speech processing, to model the multichannel information inherent in sign languages within the HMM framework, and validating them through monolingual, cross-lingual and multilingual sign language recognition studies; and (c) developing a phonology-based sign language assessment approach that assesses a produced sign in an integrated manner at two levels, namely, the lexeme level (i.e., whether the production targets the correct sign) and the form level (i.e., whether the handshape and hand movement productions are correct), and validating it on the linguistically annotated Swiss German Sign Language database SMILE.
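
As a rough illustration of the two-level assessment idea only, and not of the approach actually developed in the thesis, the sketch below assumes per-channel reference HMMs (e.g., one for hand movement and one for handshape, per sign) trained with the hmmlearn toolkit on hypothetical frame-level features. Lexeme-level correctness is checked by whether the target sign's models explain the production best, and form-level correctness by per-channel likelihood thresholds; all feature choices, thresholds, and the toolkit itself are assumptions for illustration.

    # Illustrative sketch only; features, models and thresholds are hypothetical.
    import numpy as np
    from hmmlearn import hmm

    def train_channel_hmm(sequences, n_states=5):
        """Fit one Gaussian HMM on a list of (T, d) feature sequences for one channel."""
        X = np.concatenate(sequences)
        lengths = [len(s) for s in sequences]
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
        model.fit(X, lengths)
        return model

    def assess_production(production, target_sign, reference_models, form_thresholds):
        """Two-level check of one produced sign.

        production: dict channel -> (T, d) feature array
        reference_models: dict sign -> dict channel -> trained HMM
        form_thresholds: dict channel -> per-frame log-likelihood threshold
        """
        # Lexeme level: do the target sign's models score best over all signs?
        scores = {sign: sum(models[ch].score(production[ch]) / len(production[ch])
                            for ch in models)
                  for sign, models in reference_models.items()}
        lexeme_ok = max(scores, key=scores.get) == target_sign
        # Form level: is each channel (e.g., handshape, movement) above threshold?
        form_ok = {ch: reference_models[target_sign][ch].score(production[ch])
                       / len(production[ch]) >= form_thresholds[ch]
                   for ch in reference_models[target_sign]}
        return lexeme_ok, form_ok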
