Semantic Virtual Environments with Adaptive Multimodal Interfaces
We present a system for real-time configuration of multimodal interfaces to Virtual Environments (VEs). The flexibility of our tool is supported by a semantics-based representation of VEs. Semantic descriptors are used to define both the interaction devices and the virtual entities under control. We use portable (XML) descriptors to define the I/O channels of a variety of interaction devices. Semantic description of virtual objects turns them into reactive entities with which the user can communicate in multiple ways. This article details the semantics-based representation and presents examples of multimodal interfaces created with our system, including gesture-based and PDA-based interfaces, among others.
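The abstract does not specify the descriptor schema; the following is a minimal sketch, under assumed element and attribute names (device, channel, direction, range), of how a portable XML descriptor might declare a device's I/O channels and be loaded at runtime.

# Minimal sketch of loading a portable XML device descriptor.
# The schema below (<device>, <channel>, direction, type, range) is an
# illustrative assumption, not the descriptor format used in the paper.
import xml.etree.ElementTree as ET

DESCRIPTOR = """
<device name="data_glove">
  <channel id="thumb_flex" direction="input"  type="float" range="0.0 1.0"/>
  <channel id="index_flex" direction="input"  type="float" range="0.0 1.0"/>
  <channel id="vibration"  direction="output" type="float" range="0.0 1.0"/>
</device>
"""

def load_device(xml_text: str) -> dict:
    """Turn an XML device descriptor into a dictionary of named I/O channels."""
    root = ET.fromstring(xml_text)
    channels = {}
    for ch in root.findall("channel"):
        lo, hi = (float(v) for v in ch.get("range").split())
        channels[ch.get("id")] = {
            "direction": ch.get("direction"),
            "type": ch.get("type"),
            "range": (lo, hi),
        }
    return {"name": root.get("name"), "channels": channels}

if __name__ == "__main__":
    device = load_device(DESCRIPTOR)
    print(device["name"], "exposes", len(device["channels"]), "channels")

Because the descriptor is plain XML, the same loader can be reused for any device that publishes its input and output channels in this form, which is the portability the abstract refers to.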
File: Gutierrez_Thalmann_Vexo_MMM_05.pdf (Adobe PDF, 312.88 KB, open access; MD5: 0cc1493ac2ef262252943fbfc26d7da7)