A hybrid framework for modeling comprehensible human gesture

We seek a suitable model of complex gesture patterns that can be used for both recognition and reproduction. To this end, we propose a hybrid framework in which the observable characteristics of gestures are modeled explicitly, while pattern details are handled by blind learning methods. The hybrid approach thus consists of multiple inter-communicating layers: attribute-level trackers (AT), a gesture-level tracker (GT), and a situation tracker (ST). The ATs handle low-level pattern variations of human action and carry the main responsibility for segmenting the continuous action stream. The GT captures the temporally coordinated characteristics of concurrent attributes, and the ST keeps track of application-specific context knowledge. The overall model for comprehensible gestures could also serve as a global framework for generating gesture animation.
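For orientation, the following is a minimal Python sketch of how the three layers described above might be wired together. All class names, method bodies, and data shapes are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class Segment:
    """A segment of the continuous action stream proposed by an attribute tracker."""
    attribute: str  # e.g. "hand_shape" or "arm_trajectory" (hypothetical names)
    start: int      # start frame index
    end: int        # end frame index
    label: str      # low-level pattern label


class AttributeTracker:
    """AT: follows one low-level attribute and segments the action stream."""

    def __init__(self, attribute: str):
        self.attribute = attribute

    def update(self, frame: Dict[str, float]) -> Optional[Segment]:
        # Placeholder: a real tracker would run a learned pattern model here
        # and emit a Segment when it detects a pattern boundary.
        return None


class GestureTracker:
    """GT: fuses temporally coordinated segments from concurrent attributes."""

    def combine(self, segments: List[Segment]) -> Optional[str]:
        # Placeholder: a real tracker would check temporal coordination of the
        # concurrent attribute segments and map them to a gesture hypothesis.
        if not segments:
            return None
        return "+".join(sorted(s.label for s in segments))


class SituationTracker:
    """ST: filters gesture hypotheses using application-specific context."""

    def __init__(self, allowed: List[str]):
        self.allowed = set(allowed)

    def interpret(self, gesture: Optional[str]) -> Optional[str]:
        return gesture if gesture in self.allowed else None


if __name__ == "__main__":
    ats = [AttributeTracker("hand_shape"), AttributeTracker("arm_trajectory")]
    gt = GestureTracker()
    st = SituationTracker(allowed=["fist+raise"])

    stream = [{"hand_shape": 0.1, "arm_trajectory": 0.3}]  # stand-in frames
    for frame in stream:
        segments = [s for at in ats if (s := at.update(frame)) is not None]
        print(st.interpret(gt.combine(segments)))
```

The sketch only fixes the communication pattern between the layers: ATs propose segments per frame, the GT fuses concurrent segments into a gesture hypothesis, and the ST admits or rejects it against context.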


Published in:
Proc. Computational Intelligence for Modelling, Control and Automation. Neural Networks and Advanced Control Strategies (Concurrent Systems Engineering Series, Vol. 54)
Presented at:
International Conference on Computational Intelligence for Modelling (CIMCA'99), Vienna, Austria, 17-19 February 1999
Year:
1999
Note:
Comput. Graphics Lab., Swiss Federal Inst. of Technol., Lausanne, Switzerland





