Abstract

Interactive virtual environments have long lacked the ability to interpret users' gestures. Researchers have investigated tentative solutions, but most address only a specific set of body parts, such as the hands, arms, or face. However, when a participant is placed in a virtual world to interact with synthetic inhabitants, body-oriented actions would be a more convenient and intuitive interface. To this end, we developed a hierarchical model of human actions built from fine-grained primitives. An associated recognition algorithm identifies simultaneous actions on the fly.

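As a rough illustration of the idea, the following is a minimal sketch, in Python, of a hierarchical action model built from fine-grained primitives together with an online recognizer that tracks several actions in parallel. All class names, primitive labels, and the matching strategy are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal illustrative sketch: a two-level hierarchy (actions composed of
# fine-grained primitives) and an online recognizer that matches all
# actions in parallel. Names and primitive labels are hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)
class Primitive:
    """A fine-grained, body-oriented movement unit, e.g. 'raise_arm'."""
    name: str


@dataclass
class Action:
    """A higher-level action defined as an ordered sequence of primitives."""
    name: str
    sequence: tuple  # tuple of Primitive


class OnlineRecognizer:
    """Consumes a primitive stream and reports completed actions on the fly."""

    def __init__(self, actions):
        self.actions = actions
        # Index of the next expected primitive, tracked per action so that
        # several actions can make progress simultaneously.
        self.progress = {a.name: 0 for a in actions}

    def observe(self, primitive):
        """Feed one observed primitive; return names of actions just completed."""
        completed = []
        for action in self.actions:
            i = self.progress[action.name]
            if action.sequence[i] == primitive:
                i += 1
                if i == len(action.sequence):
                    completed.append(action.name)
                    i = 0  # reset so the action can be recognized again
            self.progress[action.name] = i
        return completed


if __name__ == "__main__":
    wave = Action("wave", (Primitive("raise_arm"), Primitive("swing_hand")))
    sit = Action("sit", (Primitive("bend_knees"), Primitive("lower_torso")))
    recognizer = OnlineRecognizer([wave, sit])

    # Primitives from two actions interleave, yet both are recognized.
    stream = [Primitive("raise_arm"), Primitive("bend_knees"),
              Primitive("swing_hand"), Primitive("lower_torso")]
    for p in stream:
        for name in recognizer.observe(p):
            print("recognized:", name)  # -> wave, then sit
```

Because each action keeps its own matching progress, primitives belonging to different actions can interleave in the input stream and each action is still recognized when its sequence completes, which captures the notion of identifying simultaneous actions on the fly.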