Context-aware, multimodal, and semantic rendering engine

Several techniques exist today to render digital content such as graphics, audio, and haptics. Unfortunately, each requires faculties that cannot always be assumed; for example, presenting a picture to a blind person is useless. In this paper, we present a new multimodal rendering engine built around a web-connected server linked to other devices to support ubiquitous computing. To take advantage of each user's capabilities, we define an ontology populated with three kinds of elements: users, devices, and information. With the help of this ontology, our system automatically selects and launches a suitable rendering application. Several test-case applications were implemented to render shape, text, and video information via audio, haptic, and visual channels. Validation shows that the system is flexible, easily extensible, and promising.
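The selection step described above can be sketched as matching the modalities a user can perceive, a device can produce, and an information type can be rendered in. The sets, names, and intersection rule below are illustrative assumptions for clarity, not the paper's actual ontology or implementation:

```python
# Hypothetical sketch of ontology-driven modality selection.
# All element names and the intersection rule are assumptions,
# not the engine's real data model.

def select_modalities(user_abilities, device_outputs, info_renderable):
    """Return the modalities usable by this user, on this device,
    for this kind of information (set intersection)."""
    return user_abilities & device_outputs & info_renderable

# Example elements, as the ontology might describe them:
blind_user = {"audio", "haptic"}            # user cannot use the sight channel
desktop = {"sight", "audio", "haptic"}      # device with screen, speakers, haptic pad
text_info = {"sight", "audio", "haptic"}    # text can be displayed, spoken, or felt

print(select_modalities(blind_user, desktop, text_info))
```

A rendering application supporting any modality in the resulting set could then be launched automatically; an empty set would mean no suitable renderer exists for that user/device/information combination.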


Published in:
Proceedings of the 8th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry (VRCAI '09)
Presented at:
8th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry (VRCAI '09), Yokohama, Japan, December 14-15, 2009
Year:
2009




 Record created 2010-02-17, last modified 2018-03-17
