Volumetric Transformer Networks

Existing techniques for encoding spatial invariance in deep convolutional neural networks (CNNs) apply the same warping field to all feature channels. This ignores the fact that individual feature channels can represent different semantic parts, which can undergo different spatial transformations w.r.t. a canonical configuration. To overcome this limitation, we introduce a learnable module, the volumetric transformer network (VTN), that predicts channel-wise warping fields to reconfigure intermediate CNN features both spatially and across channels. We design our VTN as an encoder-decoder network, with modules dedicated to letting information flow across the feature channels, to account for the dependencies between the semantic parts. We further propose a loss function, defined between the warped features of pairs of instances, which improves the localization ability of the VTN. Our experiments show that the VTN consistently boosts the features' representation power and, consequently, the networks' accuracy on fine-grained image recognition and instance-level image retrieval.
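The key distinction from a standard spatial transformer is that each feature channel receives its own warping field rather than sharing one. The following is a minimal NumPy sketch of that idea, not the paper's implementation: it applies a separate (hypothetical) displacement field to each channel with nearest-neighbor sampling, so warping one semantic channel leaves the others untouched.

```python
import numpy as np

def warp_channelwise(features, flows):
    """Warp each feature channel with its own displacement field.

    features: (C, H, W) feature map.
    flows:    (C, H, W, 2) per-channel displacements (dy, dx), in pixels.

    A conventional spatial transformer would share one (H, W, 2) field
    across all C channels; here channel c is resampled with flows[c].
    Illustrative sketch only (nearest-neighbor sampling, border clamping).
    """
    C, H, W = features.shape
    ys, xs = np.mgrid[0:H, 0:W]
    out = np.empty_like(features)
    for c in range(C):
        # Sample source coordinates for this channel, clamped to the image.
        sy = np.clip(np.rint(ys + flows[c, ..., 0]).astype(int), 0, H - 1)
        sx = np.clip(np.rint(xs + flows[c, ..., 1]).astype(int), 0, W - 1)
        out[c] = features[c, sy, sx]
    return out
```

With all-zero flows the warp is the identity; setting a nonzero field on a single channel shifts only that channel, which is the channel-wise behavior the VTN's predicted fields exploit.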

Published in:
Proceedings of ECCV '20
Presented at:
The 16th European Conference on Computer Vision (ECCV 2020), Virtual Conference, August 23-28, 2020


 Record created 2020-08-12, last modified 2020-10-29
