Title: Dynamic 3D Avatar Creation from Hand-held Video Input
Authors: Ichim, Alexandru Eugen; Bouaziz, Sofien; Pauly, Mark
Date: 2015-07-27
Year: 2015
DOI: 10.1145/2766974
Handle: https://infoscience.epfl.ch/handle/20.500.14299/116627
Web of Science: WOS:000358786600011
Type: text::conference output::conference proceedings::conference paper

Abstract: We present a complete pipeline for creating fully rigged, personalized 3D facial avatars from hand-held video. Our system faithfully recovers the facial expression dynamics of the user by adapting a blendshape template to an image sequence of recorded expressions using an optimization that integrates feature tracking, optical flow, and shape from shading. Fine-scale details such as wrinkles are captured separately in normal maps and ambient occlusion maps. From this user- and expression-specific data, we learn a regressor for on-the-fly detail synthesis during animation to enhance the perceptual realism of the avatars. Our system demonstrates that the use of appropriate reconstruction priors yields compelling face rigs even with a minimalistic acquisition system and limited user assistance. This facilitates a range of new applications in computer animation and consumer-level online communication based on personalized avatars. We present real-time application demos to validate our method.

Keywords: avatars; 3d; computer vision; computer graphics; camera; 2d; tracking; reconstruction; modeling
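The abstract's template adaptation rests on the standard linear blendshape representation: an animated face is the neutral mesh plus a weighted sum of per-expression displacement vectors. A minimal sketch of evaluating such a rig (all names and the toy data are illustrative; the paper's actual optimization over feature tracking, optical flow, and shape from shading is not shown):

```python
# Hedged sketch of a linear blendshape rig: face = neutral + sum_k w_k * delta_k,
# where each delta_k is a per-vertex displacement for one expression.
# Vertices are flattened as [x0, y0, z0, x1, y1, z1, ...].

def blend(neutral, deltas, weights):
    """Evaluate the blendshape model for one set of expression weights."""
    assert len(deltas) == len(weights), "one weight per blendshape"
    face = list(neutral)
    for w, delta in zip(weights, deltas):
        for i, d in enumerate(delta):
            face[i] += w * d
    return face

# Toy example: 2 vertices, 2 blendshapes (names are hypothetical).
neutral = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0]
deltas = [
    [0.0, 0.1, 0.0, 0.0, 0.1, 0.0],  # e.g. a "smile" displacement
    [0.0, 0.0, 0.2, 0.0, 0.0, 0.2],  # e.g. a "jaw open" displacement
]
face = blend(neutral, deltas, [0.5, 1.0])
```

Fitting the rig to video then amounts to solving for the per-frame weights (and, in this paper, the user-specific deltas themselves) that best explain the observed images.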