Object manipulation and grasping for virtual humans

Virtual environments need virtual humans capable of interacting with the objects around them. Existing virtual environments lack satisfactory general solutions to this problem. The common method is to combine pre-designed (or motion-captured) virtual human animation with simple object animation. While the quality of the object-manipulation animation produced by such techniques is usually acceptable, its adaptability to changes in the virtual environment is very low, if not non-existent. There is also usually no provision for variety in interaction scenarios, meaning that a virtual human always manipulates an object in exactly the same way. The answer may lie in higher-level descriptions of object manipulation. This thesis explores the options available for constructing and exploiting such descriptions. A primary contribution of this thesis is a manipulation framework that combines a methodology for defining semantic information with several virtual human animation algorithms, enabling manipulation sequences to be described procedurally. Within this framework, we also address virtual object grasping, an action central to object manipulation. Moving towards an even higher-level specification, we then examine how motion planning techniques can generate a manipulation sequence from only the specified start and end configurations. Several case studies illustrate the capabilities and potential of the proposed approaches.
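To give a flavour of the motion-planning step mentioned above, the sketch below shows a generic sampling-based planner (a basic RRT) that connects a given start configuration to a goal configuration in a 2D space while avoiding obstacles. This is a minimal illustrative implementation under assumed names and parameters (`rrt_plan`, `is_free`, `step`, `goal_bias`), not the planner developed in the thesis.

```python
import random
import math

def rrt_plan(start, goal, is_free, step=0.5, goal_bias=0.1,
             bounds=((0.0, 10.0), (0.0, 10.0)), max_iters=5000, seed=0):
    """Grow a rapidly-exploring random tree (RRT) from `start` toward `goal`.

    `is_free(q)` reports whether configuration q is collision-free.
    Returns a list of configurations from start to goal, or None on failure.
    """
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}  # index of each node's parent in the tree

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    for _ in range(max_iters):
        # Sample a random configuration, occasionally biased toward the goal.
        if rng.random() < goal_bias:
            q_rand = goal
        else:
            q_rand = (rng.uniform(*bounds[0]), rng.uniform(*bounds[1]))

        # Find the nearest tree node and take one bounded step toward the sample.
        i_near = min(range(len(nodes)), key=lambda i: dist(nodes[i], q_rand))
        q_near = nodes[i_near]
        d = dist(q_near, q_rand)
        if d == 0.0:
            continue
        t = min(1.0, step / d)
        q_new = (q_near[0] + t * (q_rand[0] - q_near[0]),
                 q_near[1] + t * (q_rand[1] - q_near[1]))
        if not is_free(q_new):
            continue  # discard configurations in collision

        nodes.append(q_new)
        parent[len(nodes) - 1] = i_near

        # Success once the new node lies within one step of the goal:
        # walk parent pointers back to the root to recover the path.
        if dist(q_new, goal) <= step:
            path = [goal]
            i = len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None


# Example: plan around a circular obstacle of radius 1 centred at (5, 5).
free = lambda q: (q[0] - 5.0) ** 2 + (q[1] - 5.0) ** 2 > 1.0
path = rrt_plan((1.0, 1.0), (9.0, 9.0), free)
```

In a full manipulation pipeline, the returned configuration sequence would then drive the virtual human's animation, with grasping handled at the start and end of the motion.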
