Authors: Banaei, Mohammadreza; Balazy, Klaudia; Kasymov, Artur; Lebret, Remi; Tabor, Jacek; Aberer, Karl
Deposit date: 2024-05-01
Publication date: 2023-01-01
DOI: 10.18653/v1/2023.findings-eacl.133
Handle: https://infoscience.epfl.ch/handle/20.500.14299/207595
Web of Science ID: WOS:001181085100131
Title: Revisiting Offline Compression: Going Beyond Factorization-based Methods for Transformer Language Models
Subject: Technology
Type: text::conference output::conference proceedings::conference paper

Abstract: Recent transformer language models achieve outstanding results in many natural language processing (NLP) tasks. However, their enormous size often makes them impractical on memory-constrained devices, requiring practitioners to compress them to smaller networks. In this paper, we explore offline compression methods, i.e., computationally cheap approaches that do not require further fine-tuning of the compressed model. We challenge classical matrix factorization methods by proposing a novel, better-performing autoencoder-based framework. We perform a comprehensive ablation study of our approach, examining its different aspects over a diverse set of evaluation settings. Moreover, we show that enabling collaboration between modules across layers by compressing certain modules together positively impacts the final model performance. Experiments on various NLP tasks demonstrate that our approach significantly outperforms commonly used factorization-based offline compression methods.
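
Illustrative note (not part of the original record): the abstract contrasts the proposed autoencoder-based framework with factorization-based offline compression. As a point of reference, the sketch below shows what a typical factorization-based baseline looks like in PyTorch: a linear layer's weight matrix is approximated offline with a truncated SVD and replaced by two smaller layers, with no further fine-tuning. The function name svd_compress_linear, the chosen rank, and the layer sizes are illustrative assumptions; the paper's autoencoder framework is not reproduced here.

    # Sketch of a factorization-based offline compression baseline (assumed, not the paper's code):
    # approximate a Linear layer's weight W with a rank-r truncated SVD and replace it
    # by two smaller Linear layers, without any finetuning afterwards.
    import torch

    def svd_compress_linear(layer: torch.nn.Linear, rank: int) -> torch.nn.Sequential:
        W = layer.weight.data                      # shape: (out_features, in_features)
        U, S, Vh = torch.linalg.svd(W, full_matrices=False)
        A = Vh[:rank, :]                           # (rank, in_features)
        B = U[:, :rank] * S[:rank]                 # (out_features, rank), i.e. U_r diag(S_r)

        first = torch.nn.Linear(layer.in_features, rank, bias=False)
        second = torch.nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
        first.weight.data = A.contiguous()
        second.weight.data = B.contiguous()
        if layer.bias is not None:
            second.bias.data = layer.bias.data
        return torch.nn.Sequential(first, second)  # x -> x A^T B^T + b ≈ x W^T + b

    # Example: compress a 3072 -> 768 feed-forward projection to rank 128
    # (parameters drop from ~2.36M to ~0.49M); the print shows the offline approximation error.
    layer = torch.nn.Linear(3072, 768)
    compressed = svd_compress_linear(layer, rank=128)
    x = torch.randn(4, 3072)
    print(torch.dist(layer(x), compressed(x)))

The paper's contribution replaces this per-matrix factorization step with an autoencoder-based compression scheme, including jointly compressing certain modules across layers, while keeping the same offline (no-finetuning) setting.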