Built-in Foreground/Background Prior for Weakly-Supervised Semantic Segmentation

Pixel-level annotations are expensive and time-consuming to obtain. Hence, weak supervision using only image tags could have a significant impact on semantic segmentation. Recently, CNN-based methods have been proposed that fine-tune pre-trained networks using image tags alone. Without additional information, this leads to poor localization accuracy. This problem, however, has been alleviated by using objectness priors to generate foreground/background masks. Unfortunately, these priors either require pixel-level annotations or bounding boxes for training, or still yield inaccurate object boundaries. Here, we propose a novel method to extract markedly more accurate masks from the pre-trained network itself, forgoing external objectness modules. This is accomplished using the activations of the higher-level convolutional layers, smoothed by a dense CRF. We demonstrate that our method, based on these masks and a weakly-supervised loss, outperforms state-of-the-art tag-based weakly-supervised semantic segmentation techniques. Furthermore, we introduce a new form of inexpensive weak supervision that yields an additional accuracy boost.
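The core idea in the abstract — fusing higher-level convolutional activations into a foreground/background mask — can be sketched in a few lines. This is an illustrative approximation, not the paper's implementation: the function name, the channel-averaging fusion, and the fixed threshold are assumptions, and the dense-CRF smoothing step of the actual method is replaced here by a plain threshold.

```python
import numpy as np

def foreground_mask(activations, image_size, threshold=0.5):
    """Derive a rough foreground/background mask from high-level CNN
    activations of shape (channels, h, w). Illustrative sketch only:
    the paper smooths with a dense CRF, which a simple threshold
    stands in for here. Assumes image_size is a multiple of (h, w)."""
    # Fuse channels: average activation strength per spatial location.
    heat = activations.mean(axis=0)
    # Normalize the fused map to [0, 1].
    heat = (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
    # Nearest-neighbour upsampling back to the input image resolution.
    H, W = image_size
    h, w = heat.shape
    heat = np.repeat(np.repeat(heat, H // h, axis=0), W // w, axis=1)
    # Threshold into a binary foreground mask.
    return heat > threshold
```

In practice the binary map produced this way would be refined with a fully-connected CRF (e.g. via the `pydensecrf` package) before being used as a foreground/background prior.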


Published in:
Computer Vision - ECCV 2016, Part VIII, LNCS 9912, pp. 413-432
Presented at:
European Conference on Computer Vision (ECCV), Amsterdam
Year:
2016
Publisher:
Springer International Publishing, Cham
ISSN:
0302-9743
ISBN:
978-3-319-46484-8
978-3-319-46483-1




 Record created 2016-09-05, last modified 2018-09-13

Fulltext:
SalehEtAlECCV16 - Download fulltext PDF
SalehEtAlECCV16Supp - Download fulltext PDF