Fast Hand Gesture Recognition based on Saliency Maps: An Application to Interactive Robotic Marionette Playing

In this paper, we propose a fast algorithm for gesture recognition based on the saliency maps of visual attention. A tuned saliency-based model of visual attention is used to find potential hand regions in video frames. To obtain the overall movement of the hand, saliency maps of the differences of consecutive video frames are overlaid. An improved Characteristic Loci feature extraction method is introduced and used to code the obtained hand movement. Finally, the extracted feature vector is used to train SVMs that classify the gestures. The proposed method, along with a hand-eye coordination model, is used to play a robotic marionette, and an approval/rejection phase is used to interactively correct the robotic marionette's behavior.
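The recognition pipeline described in the abstract (frame differencing, saliency overlay, Characteristic-Loci-style coding, SVM classification) can be sketched as follows. This is a minimal, self-contained illustration, not the paper's implementation: the blur-based center-surround saliency, the synthetic "gesture" frames, and the simplified four-direction, crossing-capped Characteristic Loci coding are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.svm import SVC

def blur(img, k):
    # Separable box blur; a lightweight stand-in for the Gaussian
    # pyramids of a full saliency model (assumption, not the paper's model).
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, out)

def saliency(frame):
    # Center-surround difference: fine scale minus coarse scale.
    return np.abs(blur(frame, 3) - blur(frame, 9))

def motion_map(frames):
    # Overlay saliency maps of consecutive frame differences,
    # accumulating the hand's overall movement.
    acc = np.zeros_like(frames[0], dtype=float)
    for prev, cur in zip(frames, frames[1:]):
        acc += saliency(np.abs(cur - prev))
    return acc

def crossings(binary, axis, reverse):
    # Count 0->1 transitions met when scanning toward each pixel
    # from one image border (one ray of the Characteristic Loci code).
    b = np.flip(binary, axis=axis) if reverse else binary
    t = np.zeros_like(b, dtype=int)
    if axis == 0:
        t[0, :] = b[0, :] == 1
        t[1:, :] = (b[1:, :] == 1) & (b[:-1, :] == 0)
    else:
        t[:, 0] = b[:, 0] == 1
        t[:, 1:] = (b[:, 1:] == 1) & (b[:, :-1] == 0)
    c = np.cumsum(t, axis=axis) - t  # crossings strictly before each pixel
    return np.flip(c, axis=axis) if reverse else c

def loci_features(binary):
    # Simplified Characteristic Loci: per background pixel, cap the
    # crossing count in each of 4 directions at 2, combine into a
    # base-3 code (81 possible values), and histogram the codes.
    codes = np.zeros(binary.shape, dtype=int)
    for k, (axis, rev) in enumerate([(0, False), (0, True), (1, False), (1, True)]):
        codes += np.minimum(crossings(binary, axis, rev), 2) * 3 ** k
    hist = np.bincount(codes[binary == 0], minlength=81).astype(float)
    return hist / max(hist.sum(), 1.0)

def fake_gesture(offset):
    # Hypothetical stand-in for video input: a bright blob ("hand")
    # moving `offset` rows per frame.
    frames = []
    for t in range(5):
        f = np.zeros((32, 32))
        f[8 + offset * t:12 + offset * t, 10:14] = 1.0
        frames.append(f)
    return frames

rng = np.random.default_rng(0)
X, y = [], []
for label, off in [(0, 1), (1, 2)]:       # two gesture classes: slow vs fast motion
    for _ in range(10):
        frames = [f + 0.05 * rng.standard_normal(f.shape) for f in fake_gesture(off)]
        m = motion_map(frames)
        X.append(loci_features((m > m.mean()).astype(int)))
        y.append(label)

clf = SVC(kernel="rbf").fit(X, y)         # train the gesture classifier
print(clf.score(X, y))                    # training accuracy on the toy data
```

Each feature vector is an 81-bin code histogram, so gestures of different spatial extent (here, blobs moving at different speeds) yield different distributions that the SVM can separate.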

Published in:
RO-MAN 2009: The 18th IEEE International Symposium on Robot and Human Interactive Communication, Vols. 1 and 2, 681-687
Presented at:
18th IEEE International Symposium on Robot and Human Interactive Communication, Toyama, Japan, Sep 27-Oct 02, 2009
IEEE Service Center, 445 Hoes Lane, PO Box 1331, Piscataway, NJ 08855-1331, USA

Record created 2012-03-12, last modified 2018-01-28
