The realistic reconstruction of hair motion is challenging because of hair's complex occlusion, lack of a well-defined surface, and non-Lambertian material. We present a system for passive capture of dynamic hair performances using a set of high-speed video cameras. Our key insight is that, while hair color is unlikely to match across multiple views, the response to oriented filters will. We combine a multi-scale version of this orientation-based matching metric with bilateral aggregation, an MRF-based stereo reconstruction technique, and algorithms for temporal tracking and de-noising. Our final output is a set of hair strands for each frame, grown according to the per-frame reconstructed rough geometry and orientation field. We demonstrate results for a number of hair styles ranging from smooth and ordered to curly and messy.
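As a rough illustration of the orientation-based matching idea, the sketch below computes per-pixel responses to a bank of Gabor-style oriented filters and compares the resulting response vectors with an L2 cost. This is a minimal sketch under stated assumptions, not the authors' implementation: the function names and kernel parameters are hypothetical, the choice of Gabor kernels is one common way to build oriented filters, and the paper's multi-scale metric, bilateral aggregation, and MRF stereo are not reproduced here.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, ksize=15, sigma=3.0, lambd=6.0, gamma=0.5):
    """Oriented (Gabor) filter kernel at angle theta.

    Parameter values are illustrative assumptions, not the paper's.
    """
    half = ksize // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    xr = xs * np.cos(theta) + ys * np.sin(theta)    # rotate coordinates
    yr = -xs * np.sin(theta) + ys * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / lambd)
    return envelope * carrier

def orientation_features(image, n_orientations=16):
    """Per-pixel vector of oriented-filter response magnitudes.

    `image` is a 2D grayscale float array. Unlike raw color, these
    responses should be comparable across views of the same strand,
    which is what makes them usable as a stereo matching metric.
    """
    feats = []
    for k in range(n_orientations):
        theta = np.pi * k / n_orientations    # orientations in [0, pi)
        feats.append(np.abs(convolve(image, gabor_kernel(theta))))
    return np.stack(feats, axis=-1)           # shape: H x W x n_orientations

def matching_cost(feat_a, feat_b):
    """L2 distance between response vectors at corresponding pixels."""
    return np.linalg.norm(feat_a - feat_b, axis=-1)
```

Given such features, a per-pixel dominant orientation falls out as `orientation_features(img).argmax(axis=-1)`, and `matching_cost` can score candidate correspondences between rectified views; in the paper this role is played by the multi-scale orientation metric feeding the MRF stereo stage.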