Title: Built-in Foreground/Background Prior for Weakly-Supervised Semantic Segmentation
Authors: Saleh, Fatemehsadat; Ali Akbarian, M. Sadegh; Salzmann, Mathieu; Petersson, Lars; Gould, Stephen; Alvarez, Jose M.
Date: 2016-09-05
Year: 2016
DOI: 10.1007/978-3-319-46484-8_25
Handle: https://infoscience.epfl.ch/handle/20.500.14299/129087
Web of Science ID: WOS:000389500600025
Document type: Conference paper
Keywords: Semantic segmentation; Weak annotation; Convolutional neural networks; Weakly-supervised segmentation

Abstract: Pixel-level annotations are expensive and time-consuming to obtain. Hence, weak supervision using only image tags could have a significant impact on semantic segmentation. Recent CNN-based methods propose fine-tuning pre-trained networks using image tags; without additional information, this leads to poor localization accuracy. This problem has been alleviated by making use of objectness priors to generate foreground/background masks. Unfortunately, these priors either require pixel-level annotations or bounding boxes for training, or still yield inaccurate object boundaries. Here, we propose a novel method to extract markedly more accurate masks from the pre-trained network itself, forgoing external objectness modules. This is accomplished using the activations of the higher-level convolutional layers, smoothed by a dense CRF. We demonstrate that our method, based on these masks and a weakly-supervised loss, outperforms state-of-the-art tag-based weakly-supervised semantic segmentation techniques. Furthermore, we introduce a new form of inexpensive weak supervision that yields an additional accuracy boost.
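The abstract describes the core mechanism only at a high level: pool the activations of a pre-trained network's higher convolutional layers into a coarse foreground heat map, then smooth it with a dense CRF to obtain a foreground/background mask. The sketch below illustrates that pipeline; it is not the authors' released code. It assumes a torchvision VGG-16 backbone and the pydensecrf library, and the layer choice, channel aggregation (a simple mean), threshold, and CRF parameters are all illustrative assumptions rather than values from the paper.

Illustrative sketch (Python):

import numpy as np
import torch
import torch.nn.functional as F
from torchvision.models import vgg16
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def builtin_fg_bg_prior(image_tensor, image_rgb, threshold=0.5):
    """Hypothetical helper, not from the paper.
    image_tensor: 1x3xHxW normalized float input for the network.
    image_rgb:    HxWx3 uint8 array of the same image, for the bilateral CRF term.
    Returns a binary HxW foreground mask."""
    # Higher-level convolutional activations from a pre-trained network
    # (the abstract's "built-in" prior; VGG-16 is an assumed backbone here).
    net = vgg16(pretrained=True).features.eval()
    with torch.no_grad():
        act = net(image_tensor)                       # 1xCxhxw, coarse resolution

    # Aggregate channels into one soft foreground heat map and upsample to image size.
    heat = act.mean(dim=1, keepdim=True)              # simple mean over channels (assumption)
    heat = F.interpolate(heat, size=image_tensor.shape[-2:],
                         mode="bilinear", align_corners=False)[0, 0]
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
    fg_prob = heat.numpy()                            # HxW in [0, 1]

    # Smooth the soft mask with a fully connected (dense) CRF over the image.
    h, w = fg_prob.shape
    probs = np.stack([1.0 - fg_prob, fg_prob]).astype(np.float32)  # (bg, fg)
    d = dcrf.DenseCRF2D(w, h, 2)
    d.setUnaryEnergy(unary_from_softmax(probs))
    d.addPairwiseGaussian(sxy=3, compat=3)            # illustrative CRF parameters
    d.addPairwiseBilateral(sxy=60, srgb=10, compat=10,
                           rgbim=np.ascontiguousarray(image_rgb))
    q = np.array(d.inference(5)).reshape(2, h, w)
    return (q[1] > threshold).astype(np.uint8)

In the paper, masks obtained this way serve as foreground/background priors for a weakly-supervised segmentation loss; the sketch above only covers the mask-extraction step.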