Estimating Image Depth in the Comics Domain
Estimating the depth of comics images is challenging because such images a) are monocular; b) lack ground-truth depth annotations; c) vary widely in artistic style; d) are sparse and noisy. We therefore use an off-the-shelf unsupervised image-to-image translation method to translate comics images into natural ones, and then use an attention-guided monocular depth estimator to predict their depth. This lets us leverage the depth annotations of existing natural images to train the depth estimator. Furthermore, our model learns to distinguish between text and images in the comics panels, which reduces text-based artefacts in the depth estimates. Our method consistently outperforms existing state-of-the-art approaches across all metrics on both the DCM and eBDtheque images. Finally, we introduce a dataset to evaluate depth prediction on comics. Our code and annotated dataset will be made publicly available.
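The abstract describes a three-part pipeline: translate a comics panel into a natural-looking image, estimate depth on the translated image with a model trained on annotated natural photos, and suppress text regions so lettering does not corrupt the depth map. The sketch below illustrates only this data flow; every function name and the toy "models" inside are hypothetical placeholders standing in for the real translation network, depth estimator, and text detector, not the authors' implementation.

```python
# Illustrative data-flow sketch of the pipeline from the abstract.
# All names below are hypothetical placeholders; images are tiny
# lists of floats in [0, 1] instead of real tensors.

def translate_to_natural(panel):
    # Placeholder for the unsupervised comics-to-natural translator
    # (in practice a GAN-style generator). Here: identity mapping.
    return [row[:] for row in panel]

def detect_text_mask(panel):
    # Placeholder text detector: flags very bright pixels as "text"
    # (speech-balloon lettering tends to sit on white backgrounds).
    return [[1 if px > 0.9 else 0 for px in row] for row in panel]

def estimate_depth(natural_img):
    # Placeholder monocular depth estimator: maps intensity to a
    # pseudo-depth value. A real model would be trained on natural
    # images with ground-truth depth.
    return [[1.0 - px for px in row] for row in natural_img]

def comics_depth(panel):
    natural = translate_to_natural(panel)   # comics -> natural domain
    depth = estimate_depth(natural)         # depth in natural domain
    mask = detect_text_mask(panel)          # locate lettering
    # Replace depth at text pixels with a neutral value so text does
    # not leave artefacts in the final estimate.
    return [
        [d if m == 0 else 0.5 for d, m in zip(drow, mrow)]
        for drow, mrow in zip(depth, mask)
    ]

panel = [[0.2, 0.95],
         [0.5, 0.10]]
depth_map = comics_depth(panel)
```

On this toy panel, the bright pixel (0.95) is treated as text and assigned the neutral depth 0.5, while the remaining pixels get the placeholder intensity-based depth.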