Fast Hand Gesture Recognition based on Saliency Maps: An Application to Interactive Robotic Marionette Playing
In this paper, we propose a fast algorithm for gesture recognition based on the saliency maps of visual attention. A tuned saliency-based model of visual attention is used to find potential hand regions in video frames. To obtain the overall movement of the hand, saliency maps of the differences of consecutive video frames are overlaid. An improved Characteristic Loci feature extraction method is introduced and used to encode the obtained hand movement. Finally, the extracted feature vectors are used to train SVMs to classify the gestures. The proposed method, along with a hand-eye coordination model, is used to play a robotic marionette, and an approval/rejection phase is used to interactively correct the robotic marionette's behavior.
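For illustration only, the following sketch follows the same pipeline described in the abstract; it is not the authors' implementation. It uses OpenCV's spectral-residual saliency (from opencv-contrib) as a stand-in for the paper's tuned visual-attention model, a simplified Characteristic Loci histogram in place of the paper's improved variant, and scikit-learn's SVC for classification; all function names and thresholds here are illustrative assumptions.

```python
# Sketch of the described pipeline (assumed stand-ins, not the paper's code).
import cv2                      # requires opencv-contrib-python for cv2.saliency
import numpy as np
from sklearn.svm import SVC

def movement_map(frames):
    """Overlay saliency maps of consecutive-frame differences into one map."""
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    acc = np.zeros(frames[0].shape[:2], dtype=np.float32)
    for prev, cur in zip(frames, frames[1:]):
        diff = cv2.absdiff(cur, prev)                   # difference of consecutive frames
        ok, sal = saliency.computeSaliency(diff)        # saliency of the difference image
        if ok:
            acc = np.maximum(acc, sal.astype(np.float32))  # overlay via pixel-wise max
    return acc

def characteristic_loci(binary, step=8):
    """Toy Characteristic Loci coding: for sampled background pixels, count
    intensity transitions (a proxy for stroke crossings) in four directions,
    clamp each count to 0/1/2+, and histogram the resulting 4-digit codes."""
    h, w = binary.shape
    hist = np.zeros(3 ** 4, dtype=np.float32)
    for y in range(0, h, step):
        for x in range(0, w, step):
            if binary[y, x]:                            # skip foreground pixels
                continue
            counts = [
                min(2, int(np.count_nonzero(np.diff(binary[:y, x])))),  # up
                min(2, int(np.count_nonzero(np.diff(binary[y:, x])))),  # down
                min(2, int(np.count_nonzero(np.diff(binary[y, :x])))),  # left
                min(2, int(np.count_nonzero(np.diff(binary[y, x:])))),  # right
            ]
            code = counts[0] * 27 + counts[1] * 9 + counts[2] * 3 + counts[3]
            hist[code] += 1
    return hist / max(hist.sum(), 1.0)                  # normalized feature vector

def gesture_features(frames):
    acc = movement_map(frames)
    binary = (acc > 0.5 * acc.max()).astype(np.uint8)   # heuristic threshold
    return characteristic_loci(binary)

# Training: sequences is a list of frame lists, labels the gesture classes.
# clf = SVC(kernel="rbf").fit([gesture_features(s) for s in sequences], labels)
```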
Year: 2009
Pages: 681-687
Review status: NON-REVIEWED
Event name | Event place | Event date
 | Toyama, JAPAN | Sep 27-Oct 02, 2009