Authors: Deschenaux, Justin Samuel; Krawczuk, Igor; Chrysos, Grigorios; Cevher, Volkan
Dates: 2024-11-14; 2024-05-29
Handle: https://infoscience.epfl.ch/handle/20.500.14299/242026
arXiv: 2405.19201v2
Abstract: Denoising Diffusion Probabilistic Models (DDPMs) exhibit remarkable capabilities in image generation, and prior studies suggest that they can generalize by composing latent factors learned from the training data. In this work, we go further and study DDPMs trained on strictly separate subsets of the data distribution, with large gaps in the support of the latent factors. We show that such a model can effectively generate images in the unexplored, intermediate regions of the distribution. For instance, when trained on clearly smiling and non-smiling faces, we demonstrate a sampling procedure that can generate slightly smiling faces without reference images (zero-shot interpolation). We replicate these findings for other attributes as well as other datasets. Our code is available at this https URL.
Language: en
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Artificial Intelligence; Computer Science - Neural and Evolutionary Computing; ML-AI
Title: Going beyond Compositions, DDPMs Can Produce Zero-Shot Interpolations
Document type: text::conference item::conference proceedings::conference paper
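Background note: the abstract refers to sampling from a trained DDPM; as general context only (this is the standard reverse/ancestral sampling step of Ho et al., 2020, not the zero-shot interpolation procedure proposed in this paper), one reverse step can be written as below, where $\epsilon_\theta$ is the learned noise predictor, $\alpha_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$ define the noise schedule, and $\sigma_t$ is the sampling noise scale:

$$
x_{t-1} = \frac{1}{\sqrt{\alpha_t}} \left( x_t - \frac{1 - \alpha_t}{\sqrt{1 - \bar{\alpha}_t}} \, \epsilon_\theta(x_t, t) \right) + \sigma_t z, \qquad z \sim \mathcal{N}(0, I).
$$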