This paper explains methods for providing autonomous virtual humans with the skills necessary to perform stand-alone roles in films, games, and interactive television. We present current research developments in the virtual life of autonomous synthetic actors. After a brief description of our geometric, physical, and auditory virtual environments, we introduce perception-action principles with a few simple examples. We emphasize the concept of virtual sensors for virtual humans, describing in particular our experience implementing virtual vision, tactile, and hearing sensors. We then describe knowledge-based navigation, knowledge-based locomotion and, in more detail, sensor-based tennis.
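The perception-action principle mentioned above can be illustrated as a loop in which a virtual sensor filters the environment down to what the actor can perceive, and an action is then selected from those percepts. The following is a minimal sketch of such a loop with a virtual vision sensor; all class and function names are illustrative assumptions, not the authors' actual implementation:

```python
import math
from dataclasses import dataclass

@dataclass
class Percept:
    name: str
    distance: float
    bearing: float  # radians, relative to the actor's heading

class VisionSensor:
    """Virtual vision: only objects within range and field of view are perceived."""
    def __init__(self, fov=math.radians(120), max_range=10.0):
        self.fov = fov
        self.max_range = max_range

    def sense(self, actor_pos, actor_heading, objects):
        percepts = []
        for name, (x, y) in objects.items():
            dx, dy = x - actor_pos[0], y - actor_pos[1]
            dist = math.hypot(dx, dy)
            # Bearing wrapped to [-pi, pi], relative to the actor's heading
            bearing = (math.atan2(dy, dx) - actor_heading + math.pi) % (2 * math.pi) - math.pi
            if dist <= self.max_range and abs(bearing) <= self.fov / 2:
                percepts.append(Percept(name, dist, bearing))
        return percepts

def select_action(percepts):
    """Action selection: steer toward the nearest percept, or explore if none."""
    if not percepts:
        return "explore"
    nearest = min(percepts, key=lambda p: p.distance)
    if nearest.bearing > 0.1:
        return "turn_left"
    if nearest.bearing < -0.1:
        return "turn_right"
    return "move_forward"

# One perception-action cycle: the net is beyond sensor range, so only the ball is seen.
sensor = VisionSensor()
objects = {"ball": (3.0, 1.0), "net": (0.0, 20.0)}
percepts = sensor.sense(actor_pos=(0.0, 0.0), actor_heading=0.0, objects=objects)
print([p.name for p in percepts], select_action(percepts))  # → ['ball'] turn_left
```

The key design point is that the actor never reads the world state directly: everything it acts on has passed through the sensor's range and field-of-view filter, which is what distinguishes virtual-sensor-driven behavior from scripted animation. Tactile and hearing sensors would follow the same pattern with different filtering criteria.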