Crowd-based quality assessment of multiview video plus depth coding

Crowdsourcing is becoming a popular, cost-effective alternative to lab-based evaluations for subjective quality assessment. However, crowd-based evaluations are constrained by the limited availability of display devices among typical online workers, which makes the evaluation of 3D content a challenging task. In this paper, we investigate two possible approaches to crowd-based quality assessment of multiview video plus depth (MVD) content on 2D displays: using a virtual view, and using a free-viewpoint video, which corresponds to a smooth camera motion during a time freeze. We conducted the crowdsourcing experiments using seven MVD sequences encoded at different bit rates with the upcoming 3D-AVC video coding standard. The results demonstrate high correlation with subjective evaluations performed using a stereoscopic monitor in a controlled laboratory environment. The analysis shows no statistically significant difference between the two approaches.

Presented at:
IEEE International Conference on Image Processing, Paris, France, October 27-30, 2014

Record created 2014-02-21, last modified 2019-12-05
