Multimodal authoring tool for populating a database of emotional reactive animations

We aim to create a model of emotional reactive virtual humans. This model will help define realistic behavior for virtual characters based on emotions and on events in the virtual environment to which they react. A large set of pre-recorded animations is used to obtain such a model. We have defined a knowledge-based system to store animations of reflex movements, taking into account personality and emotional state. Populating such a database is a complex task. In this paper we describe a multimodal authoring tool that provides a solution to this problem. Our multimodal tool makes use of motion capture equipment, a handheld device and a large projection screen.


Published in:
MLMI 2005, Revised Selected Papers. Lecture Notes in Computer Science, 3869, 206-217
Presented at:
Second International Workshop on Machine Learning for Multimodal Interaction, Edinburgh, UK
Year:
2006
Note:
Virtual Reality Lab., Ecole Polytech. Fed. de Lausanne, Switzerland
Other identifiers:
DAR: 8385




 Record created 2007-01-16, last modified 2018-03-17
