Liu, Yuejiang; Alahi, Alexandre; Russell, Chris; Horn, Max; Zietlow, Dominik; Schölkopf, Bernhard; Locatello, Francesco
2023-04-25
Causal Triplet: An Open Challenge for Intervention-centric Causal Representation Learning
text::conference output::conference paper not in proceedings
https://infoscience.epfl.ch/handle/20.500.14299/197195

Abstract: Recent years have seen a surge of interest in learning high-level causal representations from low-level image pairs under interventions. Yet, existing efforts are largely limited to simple synthetic settings that are far removed from real-world problems. In this paper, we present Causal Triplet, a causal representation learning benchmark featuring not only visually more complex scenes, but also two crucial desiderata commonly overlooked in previous works: (i) an actionable counterfactual setting, where only certain object-level variables allow for counterfactual observations whereas others do not; (ii) an interventional downstream task with an emphasis on out-of-distribution robustness from the independent causal mechanisms principle. Through extensive experiments, we find that models built with the knowledge of disentangled or object-centric representations significantly outperform their distributed counterparts. However, recent causal representation learning methods still struggle to identify such latent structures, indicating substantial challenges and opportunities for future work. Our code and datasets will be available at https://sites.google.com/view/causaltriplet.