Virtual Human Control for Reaching Tasks

In the framework of virtual prototyping, a virtual human is a useful tool to evaluate what one can reach or see. However, because engineers and industrial designers are not necessarily skilled animators, it is important to provide intuitive, high-level interfaces to control virtual characters. With these considerations in mind, in this thesis we investigate three interfaces for controlling virtual humans in reaching tasks. First, we propose an online full-body control method based on motion capture technology, in which the posture is reconstructed with an inverse kinematics solver. Coupled with a collision avoidance module, it offers a powerful interface for interactively controlling virtual humans in cluttered environments. Second, we propose a novel interface for the online full-body control of characters whose height differs from the user's. Instead of scaling the markers to the size of the virtual human, we suggest inversely scaling the virtual world. We conduct a user study to evaluate whether such an interface helps the user experience the effort the differing-height character must exert to complete a reaching task. Finally, we investigate motion graph technology for navigation coupled with reaching motions.
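The core idea of the second interface can be sketched in a few lines: rather than multiplying captured marker positions by the user-to-character height ratio, the virtual world is multiplied by the inverse ratio, so the unscaled markers see targets at proportionally correct distances. The function names, heights, and coordinates below are illustrative assumptions, not the thesis's actual implementation.

```python
def marker_scaling(marker_pos, user_height, char_height):
    """Classic approach: scale captured marker positions to the character's size."""
    s = char_height / user_height
    return [s * c for c in marker_pos]

def world_scaling(object_pos, user_height, char_height):
    """Proposed alternative: leave the markers unscaled and inversely
    scale the positions of objects in the virtual world."""
    s = user_height / char_height  # inverse of the marker scale factor
    return [s * c for c in object_pos]

# A taller user (1.8 m) controlling a shorter character (1.2 m):
# the world grows by 1.8 / 1.2 = 1.5, so a target the short character
# can barely reach also sits at the limit of the user's unscaled reach.
target = [0.5, 1.0, 0.25]  # hypothetical target position in metres
print(world_scaling(target, 1.8, 1.2))  # → [0.75, 1.5, 0.375]
```

Either transform preserves the geometric relation between hand and target; the appeal of scaling the world instead of the markers is that the user's own proportions drive the character directly, which is what the user study evaluates.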


Related material