Abstract

Current virtual reality technologies provide many ways to interact with virtual humans. Most of these techniques, however, are limited to synthetic elements and require cumbersome sensors. We have combined a real-time simulation and rendering platform with a real-time, non-invasive, vision-based recognition system to investigate interactions in a mixed environment containing both real and synthetic elements. In this paper, we present the resulting system and use the example of a checkers game between a real person and an autonomous virtual human to demonstrate its performance.
