Abstract

This paper presents an open testbed for controlling facial animation. The adopted means of control can act at different levels of abstraction (specification) and can be associated with different interactive devices and media, giving the animator greater flexibility and freedom. Because these control means can be integrated and mixed, the testbed provides a general platform on which a user can experiment with the control method of their choice. Experiments with input accessories such as the keyboard of a music synthesizer and gestures from a DataGlove are illustrated.
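The abstract describes an architecture in which control means at different abstraction levels are bound to input devices and then mixed into a single animation stream. The following Python sketch illustrates one way such mixing could work; it is not the authors' system, and all names in it (ControlSource, FaceState, the parameter names, the stand-in device reads) are hypothetical.

```python
# A minimal sketch, assuming control means are functions that map raw
# device input to named facial parameters which are then blended.
# All identifiers here are hypothetical, not the paper's API.

from dataclasses import dataclass
from typing import Callable, Dict, List

# Low-level target: named facial parameters, each normalized to [0, 1]
# (e.g. muscle activations or minimal perceptible actions).
FaceState = Dict[str, float]

@dataclass
class ControlSource:
    """One control means: polls a device, returns facial parameters."""
    name: str
    read: Callable[[], FaceState]  # device poll -> parameter values
    weight: float = 1.0            # mixing weight when sources overlap

def mix(sources: List[ControlSource]) -> FaceState:
    """Blend all active control sources into one face state."""
    state: FaceState = {}
    for src in sources:
        for param, value in src.read().items():
            state[param] = state.get(param, 0.0) + src.weight * value
    # Clamp to the valid parameter range after mixing.
    return {p: max(0.0, min(1.0, v)) for p, v in state.items()}

# Example bindings at two abstraction levels: a synthesizer key drives
# a single low-level parameter, while a recognized glove posture
# triggers a high-level expression (a preset bundle of parameters).
def midi_key_to_jaw() -> FaceState:
    velocity = 0.7       # stand-in for a polled MIDI key velocity
    return {"jaw_open": velocity}

def glove_posture_to_smile() -> FaceState:
    recognized = True    # stand-in for a recognized DataGlove posture
    return {"mouth_corner_l": 0.8, "mouth_corner_r": 0.8} if recognized else {}

sources = [
    ControlSource("midi", midi_key_to_jaw, weight=1.0),
    ControlSource("glove", glove_posture_to_smile, weight=0.5),
]
print(mix(sources))      # one blended FaceState per animation frame
```

Representing each control means as an interchangeable source with its own weight is one plausible reading of "integration and mixing of control means": any device, at any abstraction level, contributes to the same parameter space.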
