Graph-Based Interpolation for Zooming in 3D Scenes

In multiview systems, the color-plus-depth format is used to build 3D representations of scenes within which users can freely navigate by changing their viewpoint. In this paper we present a framework for view synthesis when the user requests an arbitrary viewpoint that is closer to the 3D scene than the reference image. On the target image plane, the requested view obtained via depth-image-based rendering (DIBR) is irregularly structured and has missing information due to the expansion of objects. We propose a novel framework that adopts a graph-based representation of the target view in order to interpolate the missing image pixels under sparsity priors. More specifically, we impose that the target image is reconstructed with a few atoms of a graph-based dictionary. Experimental results show that the reconstructed views have better PSNR and MSSIM quality than those generated within the same framework with analytical dictionaries, and are comparable to those reconstructed with TV regularization and with linear interpolation on graphs. Visual results, however, show that our method better preserves details and produces fewer disturbing artifacts than the other interpolation methods.
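The sparsity prior described above (reconstructing an image with missing pixels from a few dictionary atoms, fitted only on the observed samples) can be sketched with a greedy solver such as orthogonal matching pursuit. The sketch below is a minimal illustration under stated assumptions: the dictionary `D` is a random placeholder, not the paper's graph-based dictionary, and the function `omp_inpaint` is a hypothetical name, not code from the paper.

```python
import numpy as np

def omp_inpaint(D, y, mask, k):
    """Fit a k-sparse combination of dictionary atoms to the observed
    entries of y (mask == True), then synthesize the full signal,
    which fills in the missing pixels.

    D: (n, m) dictionary, y: (n,) signal, mask: boolean (n,), k: sparsity.
    """
    Do = D[mask]                       # dictionary rows at observed pixels
    yo = y[mask]
    residual = yo.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        corr = np.abs(Do.T @ residual)
        corr[support] = 0.0            # do not reselect chosen atoms
        support.append(int(np.argmax(corr)))
        # least-squares refit of all coefficients on the current support
        coef, *_ = np.linalg.lstsq(Do[:, support], yo, rcond=None)
        residual = yo - Do[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return D @ x                       # reconstruction on the full pixel grid

# Toy example: normalized random dictionary, 2-sparse signal, ~30% missing pixels.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(128)
x_true[[5, 40]] = [1.0, -0.5]
y = D @ x_true
mask = rng.random(64) > 0.3
y_hat = omp_inpaint(D, y, mask, k=2)
print("relative error on observed pixels:",
      np.linalg.norm((y_hat - y)[mask]) / np.linalg.norm(y[mask]))
```

In the paper the dictionary is defined on a graph built from the irregular DIBR samples; here a generic dictionary stands in only to show the masked sparse-coding step.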

Published in:
Proceedings of EUSIPCO
Presented at:
25th European Signal Processing Conference (EUSIPCO), Kos, Greece, August 28-September 2, 2017

