Crowd-based quality assessment of multiview video plus depth coding

Crowdsourcing is becoming a popular, cost-effective alternative to lab-based evaluations for subjective quality assessment. However, crowd-based evaluations are constrained by the limited range of display devices available to typical online workers, which makes the evaluation of 3D content a challenging task. In this paper, we investigate two possible approaches to crowd-based quality assessment of multiview video plus depth (MVD) content on 2D displays: using a virtual view and using a free-viewpoint video, which corresponds to a smooth camera motion during a time freeze. We conducted the crowdsourcing experiments using seven MVD sequences encoded at different bit rates with the upcoming 3D-AVC video coding standard. The results show high correlation with subjective evaluations performed on a stereoscopic monitor in a controlled laboratory environment, and the analysis reveals no statistically significant difference between the two approaches.
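
As a rough illustration of the kind of analysis the abstract mentions (not taken from the paper), the sketch below shows how crowd-sourced mean opinion scores could be correlated against lab scores and how the two presentation approaches could be compared. The MOS values are invented, and the use of SciPy, Pearson/Spearman correlation, and a paired t-test are assumptions made for illustration only; the paper's actual statistical procedure may differ.

# Minimal sketch (hypothetical data): correlating crowd MOS with lab MOS
# and testing for a difference between the two crowd presentation approaches.
import numpy as np
from scipy import stats

# Hypothetical mean opinion scores for the same set of encoded sequences;
# real values would come from the crowdsourcing and laboratory experiments.
lab_mos        = np.array([4.1, 3.6, 2.9, 2.2, 1.8, 4.3, 3.2])
crowd_virtual  = np.array([4.0, 3.5, 3.0, 2.3, 1.9, 4.2, 3.1])  # virtual-view approach
crowd_freeview = np.array([4.2, 3.4, 2.8, 2.4, 1.7, 4.4, 3.0])  # free-viewpoint approach

# Correlation of each crowd-based approach with the laboratory reference.
for name, crowd in [("virtual view", crowd_virtual), ("free viewpoint", crowd_freeview)]:
    plcc, _ = stats.pearsonr(lab_mos, crowd)    # linear correlation
    srocc, _ = stats.spearmanr(lab_mos, crowd)  # rank-order correlation
    print(f"{name}: PLCC={plcc:.3f}, SROCC={srocc:.3f}")

# Paired test of the difference between the two crowd approaches.
t_stat, p_value = stats.ttest_rel(crowd_virtual, crowd_freeview)
print(f"paired t-test between approaches: t={t_stat:.3f}, p={p_value:.3f}")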


Presented at: IEEE International Conference on Image Processing, Paris, France, October 27-30, 2014
Year: 2014
