Abstract

In this paper, we show that, given video sequences of a moving person acquired with a multi-camera system, we can track joint locations during the movement and recover shape information. We outline techniques for fitting a simplified model to the noisy 3-D data extracted from the images and present a new tracking process based on least-squares matching. The recovered shape and motion parameters can be used either to reconstruct the original sequence or to allow other animation models to mimic the subject's actions. Our ultimate goal is to automate the process of building complete and realistic animation models of humans, given a set of video sequences.
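The abstract does not detail the fitting step; as a rough, hypothetical illustration of the kind of least-squares matching it alludes to, the sketch below fits a rigid transform aligning a simplified set of model points to noisy 3-D observations. The function names, the choice of a purely rigid pose, and the toy data are assumptions for illustration, not the authors' implementation.

# Hypothetical sketch (not the authors' code): least-squares fit of a rigid
# transform aligning simplified model points to noisy 3-D data, in the spirit
# of the model fitting / least-squares matching the abstract mentions.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, model_pts, observed_pts):
    # params: 3 rotation-vector components followed by 3 translation components.
    rotvec, t = params[:3], params[3:]
    transformed = Rotation.from_rotvec(rotvec).apply(model_pts) + t
    return (transformed - observed_pts).ravel()

# Toy data: model points observed under an unknown pose plus measurement noise.
rng = np.random.default_rng(0)
model = rng.normal(size=(10, 3))
true_R = Rotation.from_rotvec([0.1, -0.2, 0.3])
observed = true_R.apply(model) + np.array([0.5, 0.0, 1.0])
observed += 0.01 * rng.normal(size=observed.shape)

fit = least_squares(residuals, x0=np.zeros(6), args=(model, observed))
print("estimated rotation vector:", fit.x[:3])
print("estimated translation:    ", fit.x[3:])

In practice the paper fits a full articulated body model rather than a single rigid transform, but the same principle applies: minimize the squared distance between model predictions and the noisy 3-D measurements over the pose and shape parameters.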
