Title: Discovering Language-neutral Sub-networks in Multilingual Language Models
Authors: Foroutan, Negar; Banaei, Mohammadreza; Lebret, Rémi Philippe; Bosselut, Antoine; Aberer, Karl
Publication date: 2022-12-01
Repository record dates: 2025-03-10; 2025-03-10; 2025-03-08
DOI: 10.18653/v1/2022.emnlp-main.513
URL: https://infoscience.epfl.ch/handle/20.500.14299/247673
Language: en
Type: text::conference output::conference proceedings::conference paper

Abstract: Multilingual pre-trained language models transfer remarkably well to cross-lingual downstream tasks. However, the extent to which they learn language-neutral representations (i.e., shared representations that encode similar phenomena across languages), and the effect of such representations on cross-lingual transfer performance, remain open questions. In this work, we conceptualize the language neutrality of multilingual models as a function of the overlap between the language-encoding sub-networks of these models. We employ the lottery ticket hypothesis to discover sub-networks that are individually optimized for various languages and tasks. Our evaluation across three distinct tasks and eleven typologically diverse languages demonstrates that sub-networks for different languages are topologically similar (i.e., language-neutral), making them effective initializations for cross-lingual transfer with limited performance degradation.
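To make the abstract's method concrete, the following is a minimal illustrative sketch (not the paper's released code) of the general idea it describes: obtain a binary pruning mask per language via magnitude pruning, as in lottery-ticket-style sub-network discovery, then quantify how "topologically similar" two language sub-networks are via the overlap of their masks. All function names, the one-shot pruning choice, the 70% sparsity level, and the Jaccard overlap measure are assumptions for illustration, not details taken from the record.

```python
import torch
import torch.nn as nn

# Sketch: per-language sub-network masks via one-shot magnitude pruning,
# and a Jaccard-style overlap score between two masks as a rough proxy
# for the topological similarity of language sub-networks.

def magnitude_mask(model: nn.Module, sparsity: float = 0.7) -> dict:
    """Return {param_name: bool tensor} keeping the largest-magnitude weights."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:  # skip biases and LayerNorm-style parameters
            continue
        flat = param.detach().abs().flatten()
        k = int(flat.numel() * sparsity)                 # number of weights to prune
        threshold = torch.kthvalue(flat, max(k, 1)).values
        masks[name] = param.detach().abs() > threshold   # True = weight survives
    return masks

def mask_overlap(masks_a: dict, masks_b: dict) -> float:
    """Jaccard overlap of surviving weights across two sub-network masks."""
    inter, union = 0, 0
    for name in masks_a.keys() & masks_b.keys():
        a, b = masks_a[name], masks_b[name]
        inter += (a & b).sum().item()
        union += (a | b).sum().item()
    return inter / union if union else 0.0

if __name__ == "__main__":
    # Toy stand-ins for two copies of a model fine-tuned on different languages.
    torch.manual_seed(0)
    model_lang_a = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
    model_lang_b = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
    overlap = mask_overlap(magnitude_mask(model_lang_a), magnitude_mask(model_lang_b))
    print(f"sub-network mask overlap (Jaccard): {overlap:.3f}")
```

A higher overlap under this kind of measure would indicate that the languages rely on largely shared weights, which is the intuition behind using one language's sub-network as an initialization for cross-lingual transfer.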