Infoscience (EPFL, École polytechnique fédérale de Lausanne)

MultiModN: Multimodal, Multi-Task, Interpretable Modular Networks

Swamy, Vinitra • Satayeva, Malika • Frej, Jibril • Bossy, Thierry • Vogels, Thijs • Jaggi, Martin • Käser, Tanja • Hartley, Mary-Anne
2023
37th Conference on Neural Information Processing Systems (NeurIPS)

Predicting multiple real-world tasks in a single model often requires a particularly diverse feature space. Multimodal (MM) models aim to extract the synergistic predictive potential of multiple data types to create a shared feature space with aligned semantic meaning across inputs of drastically varying sizes (e.g. images, text, sound). Most current MM architectures fuse these representations in parallel, which not only limits their interpretability but also creates a dependency on modality availability. We present MultiModN, a multimodal, modular network that fuses latent representations in a sequence of any number, combination, or type of modality while providing granular real-time predictive feedback on any number or combination of predictive tasks. MultiModN's composable pipeline is interpretable-by-design, as well as innately multi-task and robust to the fundamental issue of biased missingness. We perform four experiments on several benchmark MM datasets across 10 real-world tasks (predicting medical diagnoses, academic performance, and weather), and show that MultiModN's sequential MM fusion does not compromise performance compared with a baseline of parallel fusion. By simulating the challenging bias of missingness not-at-random (MNAR), this work shows that, unlike MultiModN, parallel fusion baselines erroneously learn MNAR and suffer catastrophic failure when faced with different patterns of MNAR at inference. To the best of our knowledge, this is the first inherently MNAR-resistant approach to MM modeling. In conclusion, MultiModN provides granular insights, robustness, and flexibility without compromising performance.
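The sequential fusion idea described in the abstract can be sketched in a few lines: per-modality encoder modules fold each available input into a shared state vector one at a time, and task decoders can read a prediction off the state after every step, so missing modalities are simply skipped rather than imputed. The sketch below is a minimal illustration of that control flow, not the authors' implementation; all module definitions, dimensions, and names (`ModalityEncoder`, `TaskDecoder`) are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

class ModalityEncoder:
    """Hypothetical per-modality module: folds one input into the shared state."""
    def __init__(self, input_dim, state_dim):
        self.W_x = rng.normal(scale=0.1, size=(state_dim, input_dim))
        self.W_s = rng.normal(scale=0.1, size=(state_dim, state_dim))

    def __call__(self, state, x):
        # New state depends on the old state and the current modality's input.
        return np.tanh(self.W_s @ state + self.W_x @ x)

class TaskDecoder:
    """Hypothetical task head: reads a prediction off the current state."""
    def __init__(self, state_dim):
        self.w = rng.normal(scale=0.1, size=state_dim)

    def __call__(self, state):
        return 1.0 / (1.0 + np.exp(-self.w @ state))  # sigmoid probability

state_dim = 8
encoders = {"text": ModalityEncoder(32, state_dim),
            "image": ModalityEncoder(16, state_dim)}
decoders = [TaskDecoder(state_dim) for _ in range(2)]  # two tasks

state = np.zeros(state_dim)
# Only the modalities actually present are fused; order is a free choice.
inputs = [("text", rng.normal(size=32)), ("image", rng.normal(size=16))]
for name, x in inputs:
    state = encoders[name](state, x)
    preds = [dec(state) for dec in decoders]  # feedback after every step
```

Because each encoder only updates a shared state, a missing modality just drops out of the loop, which is the property the paper contrasts with parallel-fusion architectures that require all inputs at once.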

Details
Type
conference paper not in proceedings
DOI
10.48550/arXiv.2309.14118
Author(s)
Swamy, Vinitra  
Satayeva, Malika
Frej, Jibril  
Bossy, Thierry  
Vogels, Thijs  
Jaggi, Martin  
Käser, Tanja  
Hartley, Mary-Anne  
Date Issued
2023
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
MLO  
ML4ED  
AVP-E-LEARN  
Event name: 37th Conference on Neural Information Processing Systems (NeurIPS)
Event place: New Orleans, US
Event date: December 10-16, 2023

Available on Infoscience
September 27, 2023
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/201077
Contact: infoscience@epfl.ch

Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.