Learning to Reconstruct Texture-less Deformable Surfaces from a Single View

Recent years have seen the development of mature solutions for reconstructing deformable surfaces from a single image, provided that they are relatively well-textured. By contrast, recovering the 3D shape of texture-less surfaces remains an open problem, and essentially relates to Shape-from-Shading. In this paper, we introduce a data-driven approach to this problem. We propose a general framework that can predict diverse 3D representations, such as meshes, normals, and depth maps. Our experiments show that meshes are ill-suited to handle texture-less 3D reconstruction in our context. Furthermore, we demonstrate that our approach generalizes well to unseen objects, and that it yields higher-quality reconstructions than a state-of-the-art SfS technique, particularly in terms of normal estimates. Our reconstructions accurately model the fine details of the surfaces, such as the creases of a T-shirt worn by a person.

Published in:
2018 International Conference on 3D Vision (3DV), 606-615
Presented at:
6th International Conference on 3D Vision (3DV), Verona, Italy, Sep 05-08, 2018
New York, IEEE

