000138905 001__ 138905
000138905 005__ 20180913055324.0
000138905 037__ $$aREP_WORK
000138905 245__ $$aVirtual View Generation with a Hybrid Camera Array
000138905 269__ $$a2009
000138905 260__ $$c2009
000138905 336__ $$aReports
000138905 520__ $$aVirtual view synthesis from an array of cameras has been an essential element of three-dimensional video broadcasting/conferencing. In this paper, we propose a scheme based on a hybrid camera array consisting of four regular video cameras and one time-of-flight depth camera. During rendering, we use the depth image from the depth camera as initialization and compute a view-dependent scene geometry using constrained plane sweeping from the regular cameras. View-dependent texture mapping is then deployed to render the scene at the desired virtual viewpoint. Experimental results show that the addition of the time-of-flight depth camera greatly improves the rendering quality compared with an array of regular cameras of similar sparsity. In the application of 3D video broadcasting/conferencing, our hybrid camera system demonstrates great potential for reducing the amount of data to compress/stream while maintaining high rendering quality.
000138905 6531_ $$aStereo vision
000138905 6531_ $$aView synthesis
000138905 6531_ $$aSensor fusion
000138905 6531_ $$aDepth camera
000138905 6531_ $$aMulti-camera array
000138905 6531_ $$aHybrid camera array
000138905 700__ $$0242709$$aTola, Engin$$g170333
000138905 700__ $$aZhang, Cha
000138905 700__ $$aCai, Qin
000138905 700__ $$aZhang, Zhengyou
000138905 8564_ $$s9694429$$uhttps://infoscience.epfl.ch/record/138905/files/vvghybrid.pdf$$zn/a
000138905 909C0 $$0252087$$pCVLAB$$xU10659
000138905 909CO $$ooai:infoscience.tind.io:138905$$pIC$$preport
000138905 937__ $$aCVLAB-REPORT-2009-001
000138905 973__ $$aEPFL$$sPUBLISHED
000138905 980__ $$aREPORT