Title: Benefiting from Multitask Learning to Improve Single Image Super-Resolution
Authors: Rad, Mohammad Saeed; Bozorgtabar, Behzad; Musat, Claudiu; Marti, Urs-Viktor; Basler, Max; Ekenel, Hazim Kemal; Thiran, Jean-Philippe
Dates: 2019-07-30; 2019-07-30; 2020-07-20
DOI: 10.1016/j.neucom.2019.07.107
URL: https://infoscience.epfl.ch/handle/20.500.14299/159456
Type: text::journal::journal article::research article
Keywords: single image super-resolution; multitask learning; recovering realistic textures; semantic segmentation; generative adversarial network

Abstract: Despite significant progress toward super-resolving more realistic images with deeper convolutional neural networks (CNNs), reconstructing fine and natural textures remains a challenging problem. Recent works on single image super-resolution (SISR) are mostly based on optimizing pixel- and content-wise similarity between the recovered and high-resolution (HR) images, and do not benefit from the recognizability of semantic classes. In this paper, we introduce a novel approach that uses categorical information to tackle the SISR problem: we present a decoder architecture that extracts and exploits semantic information to super-resolve a given image via multitask learning, training simultaneously for image super-resolution and semantic segmentation. To exploit categorical information during training, the proposed decoder employs a single shared deep network feeding two task-specific output layers. At run-time, only the layers producing the HR image are used, and no segmentation labels are required. Extensive perceptual experiments and a user study on images randomly selected from the COCO-Stuff dataset demonstrate the effectiveness of our proposed method, which outperforms state-of-the-art methods.
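The shared-trunk, two-head design summarized in the abstract can be sketched as a small PyTorch module. This is a minimal illustration, not the authors' implementation: the module names (MultitaskSR), layer sizes, class count, and upsampling scheme are all assumptions, and whether segmentation is predicted at low or high resolution is a design choice left open here.

```python
import torch
import torch.nn as nn

class MultitaskSR(nn.Module):
    """Sketch of one shared deep network with two task-specific heads:
    a super-resolution head and a semantic-segmentation head.
    All layer sizes and the class count are illustrative."""

    def __init__(self, channels=64, num_classes=182, scale=4):
        super().__init__()
        # Shared trunk: features are computed once and reused by both tasks.
        self.shared = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # SR head: PixelShuffle upsampling from LR features to the HR image.
        self.sr_head = nn.Sequential(
            nn.Conv2d(channels, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(channels, 3, 3, padding=1),
        )
        # Segmentation head: per-pixel class logits, used only during training.
        self.seg_head = nn.Conv2d(channels, num_classes, 3, padding=1)

    def forward(self, lr_image, with_segmentation=False):
        feats = self.shared(lr_image)
        sr = self.sr_head(feats)
        if with_segmentation:
            # Training: return both outputs so a joint loss can be applied.
            return sr, self.seg_head(feats)
        # Run-time: only the SR path is evaluated; no segmentation labels needed.
        return sr
```

In a training loop under these assumptions, the two outputs would be combined into a joint objective, e.g. an SR reconstruction (or adversarial/perceptual) loss plus a weighted cross-entropy term on the segmentation logits; at inference, calling the model with the default with_segmentation=False skips the segmentation head entirely, matching the abstract's claim that no segmentation label is required at run-time.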