Learning to Reconstruct Texture-less Deformable Surfaces from a Single View

Recent years have seen the development of mature solutions for reconstructing deformable surfaces from a single image, provided that the surfaces are relatively well textured. By contrast, recovering the 3D shape of texture-less surfaces remains an open problem and is essentially a Shape-from-Shading task. In this paper, we introduce a data-driven approach to this problem. We propose a general framework that can predict diverse 3D representations, such as meshes, normal maps, and depth maps. Our experiments show that meshes are ill-suited to texture-less 3D reconstruction in our context. Furthermore, we demonstrate that our approach generalizes well to unseen objects and yields higher-quality reconstructions than a state-of-the-art Shape-from-Shading technique, particularly in terms of normal estimates. Our reconstructions accurately capture fine surface details, such as the creases of a T-shirt worn by a person.
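The abstract mentions that the framework can predict several 3D representations, including normal maps and depth maps. As a minimal illustration of how these two representations relate (this is not code from the paper, and `depth_to_normals` is a hypothetical helper name), the sketch below converts a depth map to per-pixel unit normals by finite differences, assuming an orthographic camera:

```python
import numpy as np

def depth_to_normals(depth):
    # Under an orthographic camera the surface is z = depth(x, y), and the
    # (unnormalized) normal at each pixel is (-dz/dx, -dz/dy, 1).
    dz_dy, dz_dx = np.gradient(depth.astype(float))
    n = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth, dtype=float)))
    # Normalize to unit length.
    return n / np.linalg.norm(n, axis=2, keepdims=True)

# A tilted plane z = 0.5 * x has a constant normal proportional to (-0.5, 0, 1).
h, w = 8, 8
xs = np.arange(w, dtype=float)
plane = np.tile(0.5 * xs, (h, 1))
normals = depth_to_normals(plane)
```

Evaluating normal estimates directly, as the paper does, sidesteps the depth-scale ambiguity inherent in single-view reconstruction, which is one reason normals are a common error metric in this setting.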


Presented at:
International Conference on 3D Vision, Verona, Italy, September 5-8, 2018
Date:
March 23, 2018
Dataset:
https://cvlab.epfl.ch/texless-defsurf-data




 Record created 2018-08-29, last modified 2019-06-19
