Deep Learning for 3D Surface Modelling and Reconstruction
Type: thesis::doctoral thesis
Author: Guillard, Benoît Alain René
Advisor: Fua, Pascal
Date: 2023-11-22
DOI: 10.5075/epfl-thesis-10386
Handle: https://infoscience.epfl.ch/handle/20.500.14299/202336
Language: en
Keywords: 3D deep learning; shape reconstruction; surface generation; data-driven shape priors; implicit representations

Abstract:
In recent years, deep learning has brought about a revolution, demonstrating its effectiveness at automatically capturing intricate patterns from large datasets. However, most of these successes in Computer Vision have been confined to 2D images. Extending them to 3D applications requires appropriate tools and components. The primary focus of this thesis is the generation of 3D shapes using neural networks. To this end, we develop novel tools and algorithms tailored to this purpose, and apply them to concrete problems: reconstructing 3D surfaces from images or sparse inputs, optimizing shapes with respect to physical quantities, and intuitive user editing of the generated shapes.

Firstly, we address the problem of reconstructing 3D shapes from 2D input images. To tackle this challenge, we propose a novel hybrid 3D shape representation that combines voxels and 2D atlases, leveraging the benefits of both components: the coarse grid structure enables the principled lifting of 2D features to 3D using backprojection and 3D convolutions, which are well suited to existing neural network architectures, while the 2D atlases model finer surface details. The resulting reconstruction pipeline learns a shape prior that encompasses entire object categories and achieves state-of-the-art performance on both synthetic and real images. Moreover, the approach naturally extends to the multiview scenario, allowing robust reconstruction from multiple viewpoints.
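To make the feature-lifting step above concrete, the following is a minimal sketch of backprojection in PyTorch. It is illustrative only and not the thesis's code: the function name `lift_features`, the `(3, 4)` world-to-pixel projection-matrix convention, and the grid layout are all assumptions.

```python
import torch
import torch.nn.functional as F

def lift_features(feat2d, P, grid_pts, img_size):
    """Backproject a 2D feature map into a 3D feature volume.

    feat2d:   (C, Hf, Wf) image feature map from a 2D CNN.
    P:        (3, 4) camera matrix mapping homogeneous world points to pixels.
    grid_pts: (D, H, W, 3) world coordinates of the voxel centres.
    img_size: (height, width) of the image that P projects into.
    Returns a (C, D, H, W) feature volume, ready for 3D convolutions.
    """
    D, H, W, _ = grid_pts.shape
    pts = grid_pts.reshape(-1, 3)
    homo = torch.cat([pts, torch.ones_like(pts[:, :1])], dim=-1)   # (N, 4)
    proj = homo @ P.T                                              # (N, 3)
    # Perspective divide; clamping is a crude guard against points at or
    # behind the camera plane (not handled properly in this sketch).
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
    # Normalize pixel coordinates to [-1, 1], x (width) first, as
    # grid_sample expects.
    h, w = img_size
    uv_norm = torch.stack([2 * uv[:, 0] / (w - 1) - 1,
                           2 * uv[:, 1] / (h - 1) - 1], dim=-1)
    # Bilinear sampling; voxels projecting outside the image get zeros.
    sampled = F.grid_sample(feat2d[None],            # (1, C, Hf, Wf)
                            uv_norm[None, None],     # (1, 1, N, 2)
                            align_corners=True)      # (1, C, 1, N)
    return sampled[0, :, 0].reshape(-1, D, H, W)     # (C, D, H, W)
```

In a multiview setting, per-view volumes produced this way could then be fused (e.g., averaged) before being refined by a stack of `torch.nn.Conv3d` layers.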
Then, we introduce a novel approach for parameterizing watertight surfaces using deep implicit shapes. A deep neural network regresses either a signed distance function or an occupancy field, which is subsequently meshed using readily available techniques. By restoring end-to-end differentiability (a code sketch of this trick follows the abstract), we obtain a data-driven mesh parameterization that can dynamically change its topology and generate smooth surfaces. Serving as a fully differentiable prior, this parameterization enables shape recovery from sparse observations by gradient descent and facilitates shape optimization for desired physical behaviors. Additionally, we integrate it into a sketching interface that allows shapes to be reconstructed and edited from simple line drawings, an intuitive user experience that offers a novel approach to shape design and proves resilient to diverse sketching styles.

Finally, we extend the previous approach to handle open surfaces. By extending a classical meshing procedure, we reconstruct open surfaces from unsigned distance functions. Once again, we restore end-to-end differentiability, resulting in a robust shape parameterization. We use it to model garments on human bodies and integrate it into a draping pipeline that leverages the efficiency of neural networks. Thanks to its full differentiability, we can seamlessly recover and edit garments based on real observations.
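The differentiability trick referenced in the second part can be sketched as follows, in the spirit of MeshSDF-style differentiable iso-surface extraction. This is a hedged illustration rather than the thesis's implementation: `sdf_net` is an assumed network mapping (N, 3) points to (N,) signed distances, and meshing is delegated to scikit-image's marching cubes.

```python
import torch
from skimage import measure

def differentiable_zero_surface(sdf_net, grid_res=64, bound=1.0):
    """Mesh the zero level set of a learned SDF and re-attach gradients
    to the vertices, so mesh-level losses can train the network."""
    # 1) Evaluate the SDF on a dense grid (no gradients needed here).
    xs = torch.linspace(-bound, bound, grid_res)
    pts = torch.stack(torch.meshgrid(xs, xs, xs, indexing="ij"),
                      dim=-1).reshape(-1, 3)
    with torch.no_grad():
        vol = sdf_net(pts).reshape(grid_res, grid_res, grid_res)

    # 2) Non-differentiable meshing with off-the-shelf marching cubes.
    spacing = (2 * bound / (grid_res - 1),) * 3
    v, f, _, _ = measure.marching_cubes(vol.numpy(), level=0.0, spacing=spacing)
    verts = torch.from_numpy(v.copy()).float() - bound  # back to world coords

    # 3) Surface normal = normalized SDF gradient at each vertex.
    p = verts.clone().requires_grad_(True)
    n = torch.autograd.grad(sdf_net(p).sum(), p)[0]
    n = torch.nn.functional.normalize(n, dim=-1).detach()

    # 4) Re-attach gradients: v(theta) = v0 - f(v0; theta) * n(v0).
    #    The value is unchanged since f(v0) is ~0 on the surface, but now
    #    dv/dtheta = -n * df/dtheta: vertices slide along the normal as the
    #    network's level set moves, restoring end-to-end differentiability.
    verts = verts.detach() - sdf_net(verts.detach()).unsqueeze(-1) * n
    return verts, torch.from_numpy(f.copy().astype("int64"))
```

With this, one can for instance compute a chamfer distance between `verts` and sparse target observations and backpropagate it into the network's weights or a latent code, which is the mechanism behind shape recovery by gradient descent.

For the open-surface case, one way to adapt a classical meshing procedure to unsigned distance functions is to assign pseudo-signs to grid samples by comparing UDF gradient directions: roughly opposed gradients at neighboring samples indicate a surface crossing between them. The toy sketch below propagates pseudo-signs by breadth-first search and then reuses standard marching cubes; it is order-dependent and ignores conflicting assignments, which a real implementation must resolve, and all names are hypothetical.

```python
import numpy as np
from collections import deque
from skimage import measure

def mesh_open_surface(udf, spacing):
    """udf: (R, R, R) unsigned distance samples on a regular cubic grid.
    Returns a triangle mesh approximating the (possibly open) surface."""
    # Normalized UDF gradients; they point away from the surface.
    g = np.stack(np.gradient(udf, *spacing), axis=-1)
    g /= np.linalg.norm(g, axis=-1, keepdims=True) + 1e-9

    # Propagate pseudo-signs: neighbors whose gradients roughly oppose
    # each other lie on opposite sides of the surface.
    R = udf.shape[0]
    sign = np.zeros(udf.shape, dtype=np.int8)
    sign[0, 0, 0] = 1
    queue = deque([(0, 0, 0)])
    while queue:
        p = queue.popleft()
        for axis in range(3):
            for step in (-1, 1):
                q = list(p); q[axis] += step; q = tuple(q)
                if not all(0 <= c < R for c in q) or sign[q] != 0:
                    continue
                same_side = np.dot(g[p], g[q]) > 0.0
                sign[q] = sign[p] if same_side else -sign[p]
                queue.append(q)

    # Standard marching cubes on the pseudo-signed field.
    verts, faces, _, _ = measure.marching_cubes(sign * udf, level=0.0,
                                                spacing=spacing)
    return verts, faces
```

Differentiability can then be restored on the extracted vertices with the same normal-based re-attachment as in the signed case, which is what allows garments recovered this way to be fitted and edited from real observations.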