Abstract

The color-plus-depth format allows building 3D representations of scenes within which users can freely navigate by changing their viewpoint. In this paper we present a framework for view synthesis when the user requests an arbitrary viewpoint that is closer to the 3D scene than the reference image. The requested view, constructed via depth-image-based rendering (DIBR) on the target image plane, has missing information due to the expansion of objects and to disoccluded areas. Building on our previous work on expansion-hole filling, we propose a novel method that adopts a graph-based representation of the target view in order to inpaint the disocclusion holes under sparsity priors. Experimental results indicate that the reconstructed views have PSNR and SSIM quality values comparable to those of state-of-the-art inpainting methods. Visual results show that we preserve details better without introducing blur and reduce artifacts at boundaries between objects lying on different depth layers.
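As a rough illustration of why such holes arise, the minimal sketch below (an assumption-laden example, not the paper's implementation) forward-warps a color-plus-depth image to a camera moved closer along its optical axis: pixels spread apart, and target pixels that receive no source pixel form the expansion and disocclusion holes that must be inpainted. The function name, the intrinsics (f, cx, cy), and the forward translation tz are illustrative choices only.

```python
# Minimal sketch of forward DIBR warping for a camera moved closer to the scene.
# Not the paper's method; names and parameters are hypothetical.
import numpy as np

def forward_warp_zoom(color, depth, f, cx, cy, tz):
    """Warp `color` (H x W x 3) with per-pixel `depth` (H x W, metres) to the
    viewpoint obtained by moving the camera forward by `tz` metres along its
    optical axis. Returns the warped image and a boolean hole mask (True = missing)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)

    # Back-project pixels to 3D, translate the camera, and re-project.
    X = (u - cx) * depth / f
    Y = (v - cy) * depth / f
    z_new = depth - tz                      # depth seen from the new camera
    u_new = f * X / z_new + cx              # pixels spread apart ("expansion")
    v_new = f * Y / z_new + cy

    warped = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf)
    hole = np.ones((h, w), dtype=bool)

    valid = (z_new > 0) & (u_new >= 0) & (u_new < w) & (v_new >= 0) & (v_new < h)
    ui = np.round(u_new[valid]).astype(int)
    vi = np.round(v_new[valid]).astype(int)
    zi = z_new[valid]
    ci = color[valid]

    # Splat with a z-buffer: nearer points overwrite farther ones.
    for x, y, z, c in zip(ui, vi, zi, ci):
        if z < zbuf[y, x]:
            zbuf[y, x] = z
            warped[y, x] = c
            hole[y, x] = False

    # `hole` now marks the expansion and disocclusion holes to be inpainted.
    return warped, hole
```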
