Multimodal authoring tool for populating a database of emotional reactive animations
We aim to create a model of emotional reactive virtual humans. This model will help to define realistic behavior for virtual characters based on emotions and on events in the virtual environment to which they react. A large set of pre-recorded animations will be used to obtain such a model. We have defined a knowledge-based system to store animations of reflex movements, taking into account personality and emotional state. Populating such a database is a complex task. In this paper we describe a multimodal authoring tool that provides a solution to this problem. Our multimodal tool makes use of motion capture equipment, a handheld device, and a large projection screen.