Title: Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks

Authors: Paschalidou, Despoina; Katharopoulos, Angelos; Geiger, Andreas; Fidler, Sanja

Dates: 2022-02-14; 2021-01-01
DOI: 10.1109/CVPR46437.2021.00322
URL: https://infoscience.epfl.ch/handle/20.500.14299/185420
Web of Science ID: WOS:000739917303040

Abstract: Impressive progress in 3D shape extraction has led to representations that can capture object geometries with high fidelity. In parallel, primitive-based methods seek to represent objects as semantically consistent part arrangements. However, due to the simplicity of existing primitive representations, these methods fail to accurately reconstruct 3D shapes using a small number of primitives/parts. We address the trade-off between reconstruction quality and number of parts with Neural Parts, a novel 3D primitive representation that defines primitives using an Invertible Neural Network (INN) which implements homeomorphic mappings between a sphere and the target object. The INN allows us to compute the inverse mapping of the homeomorphism, which in turn enables the efficient computation of both the implicit surface function of a primitive and its mesh, without any additional post-processing. Our model learns to parse 3D objects into semantically consistent part arrangements without any part-level supervision. Evaluations on ShapeNet, D-FAUST and FreiHAND demonstrate that our primitives can capture complex geometries and thus simultaneously achieve geometrically accurate as well as interpretable reconstructions using an order of magnitude fewer primitives than state-of-the-art shape abstraction methods.

Subjects: Computer Science, Artificial Intelligence; Imaging Science & Photographic Technology; Computer Science

Document type: Conference paper (text::conference output::conference proceedings::conference paper)
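The abstract's central mechanism is that a single invertible network yields both an explicit mesh (by pushing sphere samples forward through the homeomorphism) and an implicit surface test (by pulling query points back and checking whether their preimages fall inside the unit ball). The record contains no code, so the following is a minimal NumPy sketch of that idea under stated assumptions: it uses generic RealNVP-style affine coupling layers, and the layer count, MLP sizes, random weights, and the `inside_primitive` helper are illustrative choices, not the authors' actual architecture or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)


class AffineCoupling:
    """One RealNVP-style coupling layer on 3D points.

    One coordinate (the conditioning axis) passes through unchanged; the
    other two are scaled and shifted by a tiny MLP, so the map is bijective
    and its inverse is available in closed form.
    """

    def __init__(self, cond_axis):
        self.cond_axis = cond_axis
        self.other = [a for a in range(3) if a != cond_axis]
        # Tiny random MLP (1 -> 16 -> 4): two log-scales and two shifts.
        self.W1 = 0.1 * rng.standard_normal((1, 16))
        self.b1 = np.zeros(16)
        self.W2 = 0.1 * rng.standard_normal((16, 4))
        self.b2 = np.zeros(4)

    def _scale_shift(self, cond):
        h = np.tanh(cond @ self.W1 + self.b1)
        out = h @ self.W2 + self.b2
        return np.exp(out[:, :2]), out[:, 2:]

    def forward(self, x):
        y = x.copy()
        s, t = self._scale_shift(x[:, [self.cond_axis]])
        y[:, self.other] = x[:, self.other] * s + t
        return y

    def inverse(self, y):
        x = y.copy()
        s, t = self._scale_shift(y[:, [self.cond_axis]])
        x[:, self.other] = (y[:, self.other] - t) / s
        return x


class INNPrimitive:
    """Homeomorphism between the unit ball and one primitive (illustrative)."""

    def __init__(self, n_layers=4):
        self.layers = [AffineCoupling(cond_axis=i % 3) for i in range(n_layers)]

    def forward(self, x_sphere):
        # Sphere/ball coordinates -> primitive surface/volume.
        for layer in self.layers:
            x_sphere = layer.forward(x_sphere)
        return x_sphere

    def inverse(self, x_world):
        # World coordinates -> sphere/ball coordinates.
        for layer in reversed(self.layers):
            x_world = layer.inverse(x_world)
        return x_world


def sphere_mesh_points(n=256):
    """Sample points on the unit sphere (stand-in for sphere mesh vertices)."""
    v = rng.standard_normal((n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)


def inside_primitive(primitive, queries):
    """Implicit test: a query is inside iff its preimage lies in the unit ball."""
    return np.linalg.norm(primitive.inverse(queries), axis=1) <= 1.0


prim = INNPrimitive()
surface = prim.forward(sphere_mesh_points())                     # explicit mesh vertices
occupancy = inside_primitive(prim, rng.uniform(-2, 2, size=(5, 3)))  # implicit query
print(surface.shape, occupancy)
```

Because both directions of the mapping are exact inverses of each other, no marching cubes or other post-processing is needed to move between the mesh and the implicit representation, which is the efficiency argument the abstract makes.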