Conferring human action recognition skills to life-like agents

Most of today's virtual environments are populated with some form of autonomous, life-like agents. Such agents follow a preprogrammed sequence of behaviours that excludes the user as a participating entity in the virtual society. To make inhabited virtual reality an attractive place for information exchange and social interaction, we need to equip the autonomous agents with perception and interpretation skills. We present one such skill: human action recognition. In contrast to human-computer interfaces that focus on speech or hand gestures, we propose a full-body integration of the user. We present a model of human actions along with a real-time recognition system. To cover the bilateral aspect of human-computer interfaces, we also discuss some action-response issues. In particular, we describe a motion management library that solves animation continuity and mixing problems. Finally, we illustrate our system with two examples and discuss what we have learned.
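The abstract mentions a motion management library for animation continuity and mixing; the paper's actual algorithm is not given here, but a common approach is to crossfade between two motion clips over an overlap window. The sketch below is a minimal illustration of that idea, assuming clips are lists of per-frame joint-angle vectors; the function name and parameters are hypothetical, not from the paper.

```python
# Hypothetical sketch of motion mixing: linearly crossfading two
# joint-angle clips over an overlap window so the transition between
# animations stays continuous. Not the paper's actual library API.

def blend_clips(clip_a, clip_b, overlap):
    """Concatenate two clips (lists of per-frame joint-angle lists),
    crossfading the tail of clip_a into the head of clip_b over
    `overlap` frames with a linearly ramping weight."""
    assert overlap <= len(clip_a) and overlap <= len(clip_b)
    out = clip_a[:len(clip_a) - overlap]          # unblended prefix of clip_a
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)               # weight ramps toward clip_b
        frame_a = clip_a[len(clip_a) - overlap + i]
        frame_b = clip_b[i]
        out.append([(1 - w) * a + w * b for a, b in zip(frame_a, frame_b)])
    out.extend(clip_b[overlap:])                  # unblended suffix of clip_b
    return out
```

In a real character-animation system the blend would operate on joint rotations (e.g. interpolating quaternions) rather than raw angle lists, but the continuity idea is the same: no frame jumps discontinuously from one clip's pose to the other's.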

Published in:
Applied Artificial Intelligence Journal, 13, 539-565
Presented at Applied Artificial Intelligence '99, Nagoya, Japan

 Record created 2007-01-16, last modified 2018-03-18
