In this paper, we present current research developments in the area of autonomous virtual actors. We first define formal requirements for true virtual humans. After a brief description of the perception-action principle with a few simple examples, we emphasize the concept of virtual sensors for virtual humans. In particular, we describe in detail our experience in implementing virtual vision, tactile sensing, and audition. We then describe perception-based locomotion, a multisensor-based method for automatic grasping, and vision-based ball games. We also discuss the problems of integrating autonomous humans into virtual environments. Finally, a description of our AGENT library is presented in the Appendix.