Title: 3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping
Authors: Yang, Zhuoqian; Li, Shikai; Wu, Wayne; Dai, Bo
Publisher: IEEE
Type: Conference paper (conference proceedings)
Publication date: 2023-01-01
Record date: 2024-05-01
DOI: 10.1109/ICCV51070.2023.02103
Repository: https://infoscience.epfl.ch/handle/20.500.14299/207565
Web of Science: WOS:001169500507055

Abstract: We present 3DHumanGAN, a 3D-aware generative adversarial network that synthesizes photorealistic images of full-body humans with consistent appearance across different view angles and body poses. To tackle the representational and computational challenges of synthesizing the articulated structure of the human body, we propose a novel generator architecture in which a 2D convolutional backbone is modulated by a 3D pose mapping network. The 3D pose mapping network is formulated as a renderable implicit function conditioned on a posed 3D human mesh. This design has several merits: i) it leverages the strength of 2D GANs to produce high-quality images; ii) it generates consistent images under varying view angles and poses; iii) the model can incorporate the 3D human prior and enable pose conditioning. Our model is learned adversarially from a collection of web images without the need for manual annotation.
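The abstract's central architectural idea, a 2D convolutional backbone spatially modulated by features from a pose-conditioned implicit function, can be illustrated with a minimal PyTorch-style sketch. Everything below is an assumption for illustration only: the module names (`PoseMappingNetwork`, `ModulatedConvBlock`, `Generator`), the dimensions, the scale-and-shift modulation scheme, and the use of pre-rasterized mesh surface points are our own choices, not the paper's exact design.

```python
# Minimal sketch (assumed, not the authors' implementation): a 3D pose mapping
# network -- an MLP over 3D points from a posed human mesh -- produces
# per-pixel features that spatially modulate a 2D convolutional backbone.
import torch
import torch.nn as nn


class PoseMappingNetwork(nn.Module):
    """Implicit function: maps 3D mesh points to modulation features."""

    def __init__(self, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, surface_points):
        # surface_points: (B, H*W, 3) -- 3D points of the posed mesh projected
        # to each pixel (assumed to come from a rasterization step upstream).
        return self.mlp(surface_points)  # (B, H*W, feat_dim)


class ModulatedConvBlock(nn.Module):
    """2D conv block whose activations are scaled/shifted by pose features."""

    def __init__(self, channels, feat_dim=64):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.to_scale = nn.Conv2d(feat_dim, channels, 1)
        self.to_shift = nn.Conv2d(feat_dim, channels, 1)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x, pose_feat):
        # pose_feat: (B, feat_dim, H, W) -- per-pixel pose conditioning.
        h = self.conv(x)
        return self.act(h * (1 + self.to_scale(pose_feat)) + self.to_shift(pose_feat))


class Generator(nn.Module):
    def __init__(self, z_dim=128, channels=64, feat_dim=64, res=64):
        super().__init__()
        self.res = res
        self.from_z = nn.Linear(z_dim, channels * 4 * 4)
        self.pose_map = PoseMappingNetwork(feat_dim)
        self.up = nn.Upsample(scale_factor=res // 4, mode="bilinear",
                              align_corners=False)
        self.block = ModulatedConvBlock(channels, feat_dim)
        self.to_rgb = nn.Conv2d(channels, 3, 1)

    def forward(self, z, surface_points):
        b = z.shape[0]
        x = self.from_z(z).view(b, -1, 4, 4)
        x = self.up(x)  # (B, C, res, res)
        # Reshape the pose mapping output into a spatial feature map.
        f = self.pose_map(surface_points)                  # (B, res*res, feat_dim)
        f = f.permute(0, 2, 1).reshape(b, -1, self.res, self.res)
        x = self.block(x, f)                               # pose-modulated 2D conv
        return torch.tanh(self.to_rgb(x))


if __name__ == "__main__":
    g = Generator()
    z = torch.randn(2, 128)
    pts = torch.randn(2, 64 * 64, 3)  # placeholder for rasterized mesh points
    img = g(z, pts)
    print(img.shape)  # torch.Size([2, 3, 64, 64])
```

The sketch shows why the design keeps the merits the abstract lists: image synthesis stays in a 2D convolutional generator (quality), while all view- and pose-dependent information enters only through features computed from the posed 3D mesh (consistency and pose conditioning).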