The digitization of 3D deformable objects remains a significant challenge in computer graphics and vision, particularly in the accurate modeling of garments. Garments exhibit complex shape variability, non-rigid deformations, and frequent self-occlusion, making them difficult to represent and reconstruct from partial observations such as monocular RGB images or sparse point clouds. This thesis addresses these challenges by introducing novel methods for garment modeling and reconstruction, with a focus on realism, efficiency, and practical usability.
We first introduce differentiable pipelines that combine generic implicit functions with data-driven approaches to model diverse garments and their interaction with the underlying human body. Specifically, we design neural architectures that encode garment geometries into implicit distance fields and predict geometry-aware deformations conditioned on body shape and pose. The resulting pipelines enable realistic clothing generation, single-layer draping, and tight-fitting reconstruction.
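As a rough illustration of this kind of architecture (not the thesis implementation), the sketch below pairs a latent-conditioned implicit distance field with a deformation network conditioned on body shape and pose; all module names, layer sizes, and the SMPL-style shape/pose dimensions are assumptions made for the example.

import torch
import torch.nn as nn

class GarmentSDF(nn.Module):
    """Maps a 3D query point and a garment latent code to a signed distance."""
    def __init__(self, latent_dim=256, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, latent):
        # points: (B, N, 3), latent: (B, latent_dim)
        z = latent.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.net(torch.cat([points, z], dim=-1)).squeeze(-1)

class DeformationNet(nn.Module):
    """Predicts geometry-aware displacements conditioned on body shape and pose."""
    def __init__(self, latent_dim=256, shape_dim=10, pose_dim=72, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim + shape_dim + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, points, latent, beta, theta):
        # points: (B, N, 3); beta/theta: body shape and pose parameters
        cond = torch.cat([latent, beta, theta], dim=-1)
        cond = cond.unsqueeze(1).expand(-1, points.shape[1], -1)
        return points + self.net(torch.cat([points, cond], dim=-1))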
Then, we propose a novel parametric garment representation that can handle multi-layered clothing. As in the sewing patterns widely used by clothing designers, each garment is modeled as a set of individual 2D panels. We use a signed distance field and a label field to represent their 2D shapes and seams, respectively, and a UV parameterization is learned to map these flat 2D panels to 3D surfaces. We demonstrate that this combination is faster and yields higher-quality reconstructions than purely implicit surface representations. Its differentiability also makes the recovery of layered garments from images possible, and its 2D parameterization enables easy detection of potential collisions. Furthermore, it supports rapid editing of garment shapes and textures by modifying individual 2D panels.
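A minimal sketch of this panel representation, under assumed names and sizes, is given below: one small network predicts a 2D signed distance and a seam-label distribution for points in the panel plane, and a second network acts as the learned UV parameterization lifting panel points to 3D. It is only meant to convey the structure, not the actual thesis code.

import torch
import torch.nn as nn

class PanelFields(nn.Module):
    """For a 2D panel point: signed distance to the panel boundary and seam labels."""
    def __init__(self, latent_dim=128, num_seam_labels=8, hidden=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(2 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sdf_head = nn.Linear(hidden, 1)                   # 2D signed distance
        self.label_head = nn.Linear(hidden, num_seam_labels)   # seam label logits

    def forward(self, uv, latent):
        # uv: (N, 2), latent: (1, latent_dim) panel code
        h = self.backbone(torch.cat([uv, latent.expand(uv.shape[0], -1)], dim=-1))
        return self.sdf_head(h).squeeze(-1), self.label_head(h)

class PanelTo3D(nn.Module):
    """Learned UV parameterization: maps 2D panel coordinates to 3D positions."""
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, uv, latent):
        return self.net(torch.cat([uv, latent.expand(uv.shape[0], -1)], dim=-1))

In such a setup, interior panel points are those with a negative 2D signed distance, so sampling them and lifting them to 3D gives a correspondence between layers that can be checked for interpenetration, which is one way to read the collision-detection claim above.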
Finally, we extend the earlier model to handle garments of arbitrary shape, whether worn on the body or manipulated by hand. We combine it with a generative diffusion model to learn rich shape and deformation priors. Rather than collecting expensive real 3D data, we train the model on simulated data as the source of shape variations. Leveraging a neural mapping function that connects the 2D and 3D representations, we jointly optimize 3D garment meshes and their 2D patterns by matching the learned priors to real observations. The reconstructed garments remain physically plausible while capturing fine geometric details, enabling downstream applications such as garment retargeting and texture manipulation.
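The joint optimization can be pictured with the hedged sketch below: a latent code is fitted so that the 2D-to-3D mapping reproduces an observed point cloud, while a learned prior term keeps the solution on the manifold of plausible garments. The function prior_loss stands in for the diffusion-based shape/deformation prior and, like the other names, is an assumption for illustration only.

import torch

def fit_garment(mapping_net, prior_loss, target_points, uv_samples, steps=500):
    # Garment latent code optimized against a real observation (target_points: (M, 3)).
    latent = torch.zeros(1, 128, requires_grad=True)
    optim = torch.optim.Adam([latent], lr=1e-2)
    for _ in range(steps):
        pred_points = mapping_net(uv_samples, latent)        # lift 2D pattern samples to 3D
        # one-sided chamfer distance: each predicted point to its nearest observed point
        d = torch.cdist(pred_points, target_points).min(dim=-1).values
        loss = d.mean() + 0.1 * prior_loss(latent)           # data term + learned prior
        optim.zero_grad()
        loss.backward()
        optim.step()
    return latent.detach()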