Abstract

The human visual system relies on both monocular focusness cues and binocular stereo cues to gain effective 3D perception. Correspondingly, depth from focus/defocus (DfF/DfD) and stereo matching are the two most studied passive depth sensing schemes, which have traditionally been solved in separate tracks. However, the two techniques are essentially complementary: the monocular cue from DfF/DfD can robustly handle the repetitive textures and occlusions that are problematic for stereo matching, whereas the binocular cue from stereo matching is insensitive to defocus blur and can resolve a large depth range. In this paper, we emulate human perception and present unified learning-based techniques to conduct hybrid DfF/DfD and stereo matching. We first construct a comprehensive focal stack dataset synthesized by depth-guided light field rendering. Next, we propose different network architectures to suit various inputs, including a focal stack, a stereo image pair, a binocular focal stack, a focus-defocus image pair, and a defocus-stereo image triplet. We also exploit different methods of connecting the separate networks to integrate them into an optimized solution that produces high-fidelity disparity maps. For the experiments, we further explore different hardware setups to capture both monocular and binocular depth cues. Results show that our new learning-based hybrid techniques can significantly improve the accuracy and robustness of depth estimation. (C) 2020 Elsevier B.V. All rights reserved.
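To illustrate the monocular cue the abstract refers to, below is a minimal sketch of classical depth from focus: for each pixel, pick the focal-stack slice with the highest local sharpness (here a simple Laplacian focus measure). The function name, the focus measure, and the toy stack are illustrative assumptions, not the paper's learned networks.

```python
import numpy as np

def depth_from_focus(focal_stack):
    """focal_stack: (num_slices, H, W) grayscale images focused at
    increasing depths. Returns an (H, W) index map of the sharpest slice
    per pixel, a proxy for depth."""
    sharpness = []
    for img in focal_stack:
        # Discrete Laplacian magnitude as a simple per-pixel focus measure.
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        sharpness.append(np.abs(lap))
    return np.argmax(np.stack(sharpness), axis=0)

# Toy stack: only slice 1 contains a sharp step edge; the other slices
# are flat, so the edge pixels should be assigned to slice 1.
stack = np.zeros((3, 8, 8))
stack[1, :, 4:] = 1.0
depth = depth_from_focus(stack)
print(depth[4, 4])  # → 1
```

This winner-take-all selection is exactly what fails on textureless regions (all slices equally flat), which is one reason the paper combines the focus cue with stereo matching.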
