Exploring Causal Information Bottleneck for Adversarial Defense

Yan, Jun • Hua, Huan • Huang, Weiquan • Fang, Xi • Ge, Wancheng • Yang, Jiancheng • Wang, Yongwei
2025
IEEE Transactions on Information Forensics and Security

The information bottleneck (IB) is a promising defense against adversarial attacks on deep neural networks. However, IB-based methods often suffer from spurious correlations: the prediction correlates with non-robust features, yet this correlation does not reflect the underlying causal relationship. Such spurious correlations induce neural networks to learn fragile, incomprehensible (non-robust) features, which limits the potential of IB methods for further improving adversarial robustness. This paper addresses the issue by incorporating causal inference into the IB-based defense framework. Specifically, we propose a novel defense method that uses instrumental variables to enhance adversarial robustness. The proposed method divides features into two parts for causal effect estimation: robust features, which capture semantic information, and non-robust features, which link to vulnerable style information. Within this framework, the IB method can mitigate the influence of non-robust features and extract the robust features tied to the semantic information of objects. We conduct a thorough analysis of the effectiveness of the proposed method. Experiments on MNIST, FashionMNIST, CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that our method significantly boosts adversarial robustness against multiple adversarial attacks compared with previous methods. Our regularization method improves adversarial robustness in both natural and adversarial training frameworks. Moreover, CausalIB can be applied to both convolutional neural networks and Vision Transformers as a plug-and-play module.
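Only the abstract is available on this record, so the following is a minimal, hypothetical PyTorch sketch of the idea it describes: a bottleneck head that splits backbone features into a robust branch (used for prediction) and a non-robust branch, each compressed by a KL penalty. All names, dimensions, and loss weights are assumptions for illustration; the paper's actual CausalIB module and its instrumental-variable estimation are not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitIBHead(nn.Module):
    """Hypothetical IB-style head: splits features into robust / non-robust codes."""

    def __init__(self, feat_dim=512, robust_dim=128, nonrobust_dim=128, num_classes=10):
        super().__init__()
        # Gaussian parameters for each branch (variational-IB style).
        self.robust_mu = nn.Linear(feat_dim, robust_dim)
        self.robust_logvar = nn.Linear(feat_dim, robust_dim)
        self.nonrobust_mu = nn.Linear(feat_dim, nonrobust_dim)
        self.nonrobust_logvar = nn.Linear(feat_dim, nonrobust_dim)
        # The classifier sees only the robust code.
        self.classifier = nn.Linear(robust_dim, num_classes)

    @staticmethod
    def kl_to_standard_normal(mu, logvar):
        # KL( N(mu, sigma^2) || N(0, I) ), summed over code dimensions.
        return 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1)

    def forward(self, feats):
        mu_r, lv_r = self.robust_mu(feats), self.robust_logvar(feats)
        mu_n, lv_n = self.nonrobust_mu(feats), self.nonrobust_logvar(feats)
        z_r = mu_r + torch.randn_like(mu_r) * (0.5 * lv_r).exp()  # reparameterization
        logits = self.classifier(z_r)
        # Both branches are compressed; the non-robust branch here merely
        # stands in for the style pathway that the paper's causal machinery
        # (instrumental variables) would actually estimate and suppress.
        kl_r = self.kl_to_standard_normal(mu_r, lv_r).mean()
        kl_n = self.kl_to_standard_normal(mu_n, lv_n).mean()
        return logits, kl_r, kl_n

# Usage with any backbone that emits a flat feature vector (CNN or ViT):
feats = torch.randn(8, 512)                 # stand-in for backbone features
head = SplitIBHead()
logits, kl_r, kl_n = head(feats)
labels = torch.randint(0, 10, (8,))
loss = F.cross_entropy(logits, labels) + 1e-3 * kl_r + 1e-2 * kl_n
loss.backward()

In this toy setup the classifier sees only the robust code, loosely mirroring the abstract's claim that predictions should rest on semantic rather than style features; the weighting of the two KL terms is likewise an arbitrary illustrative choice.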

Details
Type
research article
DOI
10.1109/TIFS.2025.3611108
Scopus ID
2-s2.0-105016832036

Author(s)
Yan, Jun (Tongji University)
Hua, Huan (Tongji University)
Huang, Weiquan (Tongji University)
Fang, Xi (DP Technology Co., Ltd.)
Ge, Wancheng (Tongji University)
Yang, Jiancheng (École Polytechnique Fédérale de Lausanne)
Wang, Yongwei (Zhejiang University)

Date Issued
2025
Published in
IEEE Transactions on Information Forensics and Security
Subjects
adversarial robustness • causal theory • deep learning • information bottleneck
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
CVLAB
Available on Infoscience
October 8, 2025
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/254767