An Interdisciplinary VR-Architecture for 3D Chatting with Non-verbal Communication

Communication between avatars and agents has already been treated from different but specialized perspectives. In contrast, this paper gives a balanced view of every key architectural aspect: from text analysis to computer graphics, the chatting system, and the emotional model. Non-verbal communication such as facial expression, gaze, or head orientation is crucial for simulating realistic behavior, yet remains a neglected aspect in the simulation of virtual societies. In response, this paper presents the modularity necessary to enable conversations between virtual humans (VHs) with consistent facial expression, whether between two users through their avatars, between an avatar and an agent, or even between an avatar and a Wizard of Oz. We believe such an approach is particularly suitable for the design and implementation of applications involving VH interaction in virtual worlds. To this end, three key features are needed to design and implement this system, entitled 3D-emoChatting. First, a global architecture that combines components from several research fields. Second, real-time analysis and management of emotions that allows interactive dialogues with non-verbal communication. Third, a model of a virtual emotional mind, called emoMind, that simulates individual emotional characteristics. To conclude, we briefly describe a user test whose full treatment is beyond the scope of the present paper.

Published in:
Proceedings of the 17th Eurographics conference on Virtual Environments & Third Joint Virtual Reality, 87-94
Presented at:
17th Eurographics conference on Virtual Environments & Third Joint Virtual Reality (JVRC 2011), Nottingham, UK, September 20-21, 2011
Aire-la-Ville, Switzerland, Eurographics Association

