SROBB: Targeted Perceptual Loss for Single Image Super-Resolution

By benefiting from perceptual losses, recent studies have significantly improved the performance of the super-resolution task, where a high-resolution image is resolved from its low-resolution counterpart. Although such objective functions generate near-photorealistic results, their capability is limited, since they estimate the reconstruction error for the entire image in the same way, without considering any semantic information. In this paper, we propose a novel method to benefit from perceptual loss in a more targeted way. We optimize a deep network-based decoder with a targeted objective function that penalizes images at different semantic levels using the corresponding terms. In particular, the proposed method leverages our OBB (Object, Background and Boundary) labels, generated from segmentation labels, to estimate a suitable perceptual loss for boundaries, while considering texture similarity for backgrounds. We show that our approach results in more realistic textures and sharper edges, and outperforms other state-of-the-art algorithms in terms of both qualitative results on standard benchmarks and the results of extensive user studies.
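To make the idea of a region-targeted objective concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' released implementation: the OBB mask format (obb_mask with object/background/boundary channels), the VGG-16 layer indices, the choice of an L1 term for objects, Gram-matrix texture matching for backgrounds, and the equal weighting of the three terms are all illustrative assumptions.

    # Hypothetical sketch of a region-targeted perceptual loss.
    # Assumes obb_mask is a (N, 3, H, W) one-hot map with channels
    # {object, background, boundary}; layer indices and term weights
    # are illustrative, not the paper's exact configuration.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    class TargetedPerceptualLoss(nn.Module):
        def __init__(self, boundary_layer=8, background_layer=17):
            super().__init__()
            # Older torchvision versions use models.vgg16(pretrained=True).
            vgg = models.vgg16(weights="IMAGENET1K_V1").features.eval()
            for p in vgg.parameters():
                p.requires_grad = False
            # Shallow features are edge-sensitive (boundaries);
            # deeper features capture texture statistics (backgrounds).
            self.boundary_feats = vgg[:boundary_layer]
            self.background_feats = vgg[:background_layer]

        def forward(self, sr, hr, obb_mask):
            # Split the OBB map into object, background, boundary masks.
            obj_m, bg_m, bnd_m = obb_mask.chunk(3, dim=1)

            # Object regions: plain pixel-wise reconstruction term.
            obj_loss = (obj_m * (sr - hr).abs()).mean()

            # Boundary regions: feature distance on shallow VGG features.
            f_sr = self.boundary_feats(sr * bnd_m)
            f_hr = self.boundary_feats(hr * bnd_m)
            boundary_loss = (f_sr - f_hr).pow(2).mean()

            # Background regions: texture similarity via Gram matrices
            # of deeper VGG features (style-like term).
            g_sr = self._gram(self.background_feats(sr * bg_m))
            g_hr = self._gram(self.background_feats(hr * bg_m))
            background_loss = (g_sr - g_hr).pow(2).mean()

            # Equal weights here; in practice the terms would be balanced.
            return obj_loss + boundary_loss + background_loss

        @staticmethod
        def _gram(feat):
            n, c, h, w = feat.shape
            f = feat.view(n, c, h * w)
            return f @ f.transpose(1, 2) / (c * h * w)

Inputs are assumed to be ImageNet-normalized RGB tensors; that preprocessing is omitted here for brevity.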


Published in:
The IEEE International Conference on Computer Vision (ICCV), 2710-2719
Presented at:
International Conference on Computer Vision 2019 (ICCV 2019), Seoul, Korea
Year:
2019
Note:
ICCV 2019