Title: Divergence-Based Adaptive Extreme Video Completion
Authors: El Helou, Majed; Zhou, Ruofan; Schmutz, Frank; Guibert, Fabrice; Süsstrunk, Sabine
Date: 2020-04-14
DOI: 10.1109/ICASSP40776.2020.9053136
Handle: https://infoscience.epfl.ch/handle/20.500.14299/168165
Web of Science: WOS:000615970409107
Type: text::conference output::conference proceedings::conference paper

Abstract: Extreme image or video completion, where, for instance, only 1% of pixels are retained at random locations, allows for very cheap sampling in terms of the required pre-processing. The consequence, however, is a reconstruction that is challenging for humans and inpainting algorithms alike. We propose an extension of a state-of-the-art extreme image completion algorithm to extreme video completion. We analyze a color-motion estimation approach based on color KL-divergence that is suitable for extremely sparse scenarios. Our algorithm leverages this estimate to adapt between spatial and temporal filtering when reconstructing the sparse, randomly-sampled video. We validate our results on 50 publicly available videos using reconstruction PSNR and mean opinion scores.

Keywords: extreme completion; sparse color motion; extreme compression; video inpainting; image; complexity
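
The abstract mentions a color-motion estimate based on the KL-divergence between color distributions of the sparsely sampled pixels, used to switch between spatial and temporal filtering. The sketch below is only an illustration of that general idea, not the authors' algorithm: the histogram binning, the symmetric form of the divergence, the patch-based usage, and the decision threshold are all assumptions not specified in this record.

```python
import numpy as np

def color_kl_divergence(samples_a, samples_b, bins=8, eps=1e-8):
    """Symmetric KL-divergence between color histograms of two sparse pixel sets.

    samples_a, samples_b: (N, 3) arrays of RGB values in [0, 255], e.g. the
    ~1% retained pixels of co-located patches in two consecutive frames.
    Binning and the symmetrized divergence are illustrative choices.
    """
    def hist(samples):
        h, _ = np.histogramdd(samples, bins=bins, range=[(0, 256)] * 3)
        h = h.ravel() + eps            # avoid log(0) on empty bins
        return h / h.sum()

    p, q = hist(samples_a), hist(samples_b)
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Hypothetical usage: a low divergence suggests the patch content is static,
# so borrowing samples from the previous frame (temporal filtering) is
# reasonable; a high divergence suggests motion, so spatial-only filtering
# within the current frame is safer.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.integers(0, 256, size=(40, 3)).astype(float)
    curr = np.clip(prev + rng.normal(0, 5, size=(40, 3)), 0, 255)   # nearly static
    moved = rng.integers(0, 256, size=(40, 3)).astype(float)        # changed content
    print(color_kl_divergence(prev, curr))    # small value
    print(color_kl_divergence(prev, moved))   # larger value
```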