
Abstract

We show that arbitrarily complex animation models can be fitted effectively to noisy data extracted from ordinary face images. Our approach is based on least-squares adjustment using a set of progressively finer control triangulations, and it takes advantage of three complementary sources of information: stereo data, silhouette edges, and 2D feature points. In this way, complete head models, including ears and hair, can be acquired with a cheap and entirely passive sensor, such as an ordinary video camera. The resulting models can then be fed to existing animation software to produce synthetic sequences.
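The core idea of combining several complementary observation types in one least-squares adjustment can be sketched as follows. This is a minimal illustrative example, not the paper's method: the unknowns here are a small generic parameter vector rather than control-triangulation vertices, the observation models are synthetic linear blocks standing in for stereo, silhouette, and feature-point constraints, and all names (`make_block`, the noise levels) are hypothetical. Each block is weighted by the inverse of its assumed noise level before the stacked system is solved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknowns: a small parameter vector standing in for the
# displacements of control-triangulation vertices in the real system.
x_true = np.array([0.5, -1.0, 2.0, 0.3])

def make_block(n_obs, noise):
    """Synthetic linear observation block A x ~ b with Gaussian noise."""
    A = rng.standard_normal((n_obs, x_true.size))
    b = A @ x_true + noise * rng.standard_normal(n_obs)
    return A, b, noise

# Three complementary sources: dense but noisy stereo data, fewer but
# cleaner silhouette constraints, and a handful of accurate 2D features.
blocks = [
    make_block(200, 0.5),    # stereo data
    make_block(30, 0.1),     # silhouette edges
    make_block(10, 0.05),    # 2D feature points
]

# Scale each block by 1/noise (inverse standard deviation), stack the
# blocks, and solve the combined weighted least-squares problem.
A = np.vstack([Ak / s for Ak, _, s in blocks])
b = np.concatenate([bk / s for _, bk, s in blocks])
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_hat)
```

In the actual system the fit is nonlinear and performed over progressively finer control triangulations, so one would iterate a linearized solve of this form at each refinement level rather than solve once.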
