This paper shows how research in computer animation is leading to virtual actors capable of possessing their own animation. A high-level approach consists of specifying the animation in terms of tasks: the animator need only specify the broad outline of a particular movement, and the animation system fills in the details. Examples are shown for grasping and walking. We also show the need for modeling human behavior while taking individual differences into account. We emphasize the impact of behavioral animation through examples of autonomous virtual actors based on synthetic sensors such as synthetic vision.