Semantic Virtual Environments with Adaptive Multimodal Interfaces

We present a system for real-time configuration of multimodal interfaces to Virtual Environments (VEs). The flexibility of our tool is supported by a semantics-based representation of VEs. Semantic descriptors are used to define interaction devices and the virtual entities under control. We use portable (XML) descriptors to define the I/O channels of a variety of interaction devices. The semantic description of virtual objects turns them into reactive entities with which the user can communicate in multiple ways. This article gives details on the semantics-based representation and presents some examples of multimodal interfaces created with our system, including gesture-based and PDA-based interfaces, among others.
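To illustrate the idea of portable XML device descriptors, the following is a minimal sketch of how such a descriptor might look and be parsed. The element and attribute names (device, channel, direction, type) and the dataglove example are illustrative assumptions; the abstract does not specify the actual schema used by the system.

# Hypothetical sketch: parsing an XML descriptor of a device's I/O channels.
# The schema shown here is an assumption, not the one used in the paper.
import xml.etree.ElementTree as ET

DESCRIPTOR = """
<device name="dataglove">
  <channel direction="input"  id="hand_posture"   type="float[22]"/>
  <channel direction="input"  id="wrist_position" type="vec3"/>
  <channel direction="output" id="vibration"      type="float"/>
</device>
"""

def load_device(xml_text):
    """Return the device name and a list of its I/O channels."""
    root = ET.fromstring(xml_text)
    channels = [
        {
            "id": ch.get("id"),
            "direction": ch.get("direction"),
            "type": ch.get("type"),
        }
        for ch in root.findall("channel")
    ]
    return root.get("name"), channels

if __name__ == "__main__":
    name, channels = load_device(DESCRIPTOR)
    print(name)
    for ch in channels:
        print(ch)

In this kind of scheme, the interface configurator only needs the declared channels and their types to bind a device to a virtual entity's semantic description, without hard-coding device-specific logic.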


Published in:
11th International Conference on Multimedia Modelling, MMM2005, pages 277-283
Presented at:
The Eleventh International Multi-Media Modelling Conference, Melbourne, Australia, 12-14 January, 2005
Year:
2005
