Graph-Based Interpolation for Zooming in 3D Scenes

In multiview systems, the color-plus-depth format builds 3D representations of scenes within which users can freely navigate by changing their viewpoint. In this paper we present a framework for view synthesis when the user requests an arbitrary viewpoint that is closer to the 3D scene than the reference image. On the target image plane, the view obtained via depth-image-based rendering (DIBR) is irregularly structured and has missing information due to the expansion of objects. We propose a novel framework that adopts a graph-based representation of the target view in order to interpolate the missing pixels under sparsity priors. More specifically, we impose that the target image be reconstructed with a few atoms of a graph-based dictionary. Experimental results show that the reconstructed views have better PSNR and MSSIM quality than those generated within the same framework with analytical dictionaries, and are comparable to those obtained with TV regularization and with linear interpolation on graphs. Visual results, however, show that our method better preserves details and produces fewer disturbing artifacts than the other interpolation methods.
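The core idea of the abstract, interpolating missing pixels as a sparse combination of atoms from a graph-based dictionary, can be illustrated with a minimal sketch. This is not the paper's implementation: the graph (a 4-connected pixel grid), the dictionary (heat-kernel atoms built from the graph Laplacian), the sparse solver (orthogonal matching pursuit), and all parameter values are illustrative assumptions; the paper's actual graph construction and dictionary may differ.

```python
import numpy as np

def grid_laplacian(h, w):
    # Combinatorial Laplacian of a 4-connected h-by-w pixel grid (assumed graph).
    n = h * w
    L = np.zeros((n, n))
    for r in range(h):
        for c in range(w):
            i = r * w + c
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    j = rr * w + cc
                    L[i, i] += 1; L[j, j] += 1
                    L[i, j] -= 1; L[j, i] -= 1
    return L

def heat_kernel_dictionary(L, taus):
    # Graph-based dictionary: one localized heat-kernel atom per vertex and
    # scale, D = [exp(-tau_1 L) | exp(-tau_2 L) | ...], columns normalized.
    lam, U = np.linalg.eigh(L)
    D = np.hstack([U @ np.diag(np.exp(-t * lam)) @ U.T for t in taus])
    return D / np.linalg.norm(D, axis=0)

def omp(A, y, k):
    # Orthogonal matching pursuit: greedy k-sparse fit of y by columns of A.
    resid, support = y.copy(), []
    x = np.zeros(0)
    for _ in range(k):
        scores = np.abs(A.T @ resid)
        scores[support] = -1.0          # do not reselect chosen atoms
        support.append(int(np.argmax(scores)))
        x, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ x
    coeff = np.zeros(A.shape[1])
    coeff[support] = x
    return coeff

# Toy stand-in for a DIBR target view: a smooth 8x8 image in which only a
# subset of pixels is known; the rest are "holes" to interpolate.
h = w = 8
yy, xx = np.mgrid[0:h, 0:w]
img = np.cos(0.4 * xx + 0.2 * yy).ravel()
rng = np.random.default_rng(0)
mask = rng.random(h * w) < 0.6          # observed (rendered) pixels

D = heat_kernel_dictionary(grid_laplacian(h, w), taus=(1.0, 5.0))
coeff = omp(D[mask], img[mask], k=10)   # sparse fit on observed pixels only
recon = D @ coeff                       # evaluates the atoms everywhere,
                                        # filling in the missing pixels
err = np.linalg.norm(recon[~mask] - img[~mask]) / np.linalg.norm(img[~mask])
print(f"relative error on missing pixels: {err:.3f}")
```

The sketch mirrors the structure described in the abstract: the sparse code is estimated from the irregular set of rendered pixels alone, and the dictionary, defined on the whole graph, extends the reconstruction to the holes.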

Published in:
Proceedings of EUSIPCO
Presented at:
25th European Signal Processing Conference (EUSIPCO), Kos, Greece, August 28-September 2, 2017

