research article

Self-Attentive Spatial Adaptive Normalization for Cross-Modality Domain Adaptation

Tomar, Devavrat • Lortkipanidze, Manana • Vray, Guillaume • Bozorgtabar, Behzad • Thiran, Jean-Philippe
2021
IEEE Transactions on Medical Imaging

Despite the successes of deep neural networks on many challenging vision tasks, they often fail to generalize to new test domains that are not distributed identically to the training data. Domain adaptation becomes more challenging for cross-modality medical data with a notable domain shift, given that specific annotated imaging modalities may not be accessible or complete. Our proposed solution is based on the cross-modality synthesis of medical images to reduce the costly annotation burden on radiologists and bridge the domain gap in radiological images. We present a novel approach for image-to-image translation in medical images, capable of supervised or unsupervised (unpaired image data) setups. Built upon adversarial training, we propose a learnable self-attentive spatial normalization of the deep convolutional generator network's intermediate activations. Unlike previous attention-based image-to-image translation approaches, which are either domain-specific or require distortion of the source domain's structures, we highlight the importance of auxiliary semantic information for handling geometric changes and preserving anatomical structures during image translation. We achieve superior results for cross-modality segmentation between unpaired MRI and CT data on the multi-modality whole-heart and multi-modal brain tumor MRI (T1/T2) datasets compared to state-of-the-art methods. We also observe encouraging results in cross-modality conversion for paired MRI and CT images on a brain dataset. Furthermore, a detailed analysis of the cross-modality image translation and thorough ablation studies confirm our proposed method's efficacy.
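The abstract describes a generator whose intermediate activations are modulated by a learnable, self-attentive spatial normalization conditioned on auxiliary semantic information. The PyTorch sketch below illustrates one plausible form of such a layer, in the spirit of spatially adaptive normalization with an attention gate; it is a minimal illustration only, not the authors' implementation, and the class name, layer sizes, and sigmoid gating are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentiveSPADE(nn.Module):
    """Hypothetical sketch: spatially adaptive normalization whose per-pixel
    scale/shift, predicted from an auxiliary semantic map, are gated by a
    learned attention mask. Not the paper's exact architecture."""

    def __init__(self, num_features: int, mask_channels: int, hidden: int = 64):
        super().__init__()
        # Parameter-free normalization of the generator's intermediate activations
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        # Shared trunk over the auxiliary semantic map
        self.shared = nn.Sequential(
            nn.Conv2d(mask_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Per-pixel modulation parameters predicted from the semantic features
        self.gamma = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)
        # Attention gate deciding where the modulation is applied
        self.attn = nn.Conv2d(hidden, 1, kernel_size=1)

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # Resize the semantic map to the activation's spatial size
        mask = F.interpolate(mask, size=x.shape[2:], mode="nearest")
        feat = self.shared(mask)
        gamma = self.gamma(feat)
        beta = self.beta(feat)
        alpha = torch.sigmoid(self.attn(feat))  # attention weights in [0, 1]
        normalized = self.norm(x)
        # Attention-gated, spatially adaptive scale and shift
        return normalized * (1 + alpha * gamma) + alpha * beta

if __name__ == "__main__":
    layer = SelfAttentiveSPADE(num_features=256, mask_channels=4)
    activations = torch.randn(1, 256, 32, 32)    # generator feature map
    semantic_map = torch.randn(1, 4, 128, 128)   # auxiliary semantic input
    print(layer(activations, semantic_map).shape)  # torch.Size([1, 256, 32, 32])
```

In this reading, the attention map decides where the semantic-conditioned modulation is trusted, which is one way to preserve anatomical structure while still allowing geometry-aware changes during translation.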

Type
research article
DOI
10.1109/TMI.2021.3059265
PubMed ID
33577450
Author(s)
Tomar, Devavrat  
Lortkipanidze, Manana  
Vray, Guillaume  
Bozorgtabar, Behzad  
Thiran, Jean-Philippe  
Date Issued
2021
Published in
IEEE Transactions on Medical Imaging
Volume
40
Issue
10
Start page
2926
End page
2938
Subjects
Domain adaptation • image synthesis • generative adversarial networks • unpaired domains • self-attention
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
LTS5  
Available on Infoscience
August 2, 2023
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/199601