Revisiting Offline Compression: Going Beyond Factorization-based Methods for Transformer Language Models
Recent transformer language models achieve outstanding results in many natural language processing (NLP) tasks. However, their enormous size often makes them impractical on memory-constrained devices, requiring practitioners to compress them to smaller networks. In this paper, we explore offline compression methods, i.e., computationally cheap approaches that do not require further fine-tuning of the compressed model. We challenge classical matrix factorization methods by proposing a novel, better-performing autoencoder-based framework. We perform a comprehensive ablation study of our approach, examining its different aspects over a diverse set of evaluation settings. Moreover, we show that enabling collaboration between modules across layers by compressing certain modules together positively impacts the final model performance. Experiments on various NLP tasks demonstrate that our approach significantly outperforms commonly used factorization-based offline compression methods.
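As a rough illustration of the factorization-based baselines the abstract refers to (not the authors' implementation), the sketch below replaces a linear layer's weight matrix offline with two low-rank factors obtained from a truncated SVD; the function name factorize_weight, the chosen rank, and the toy dimensions are assumptions made for this example.

```python
import numpy as np

def factorize_weight(W: np.ndarray, rank: int):
    """Return (A, B) with A @ B approximating W; A: (d_out, rank), B: (rank, d_in)."""
    # Truncated SVD: keep only the top-`rank` singular triplets (illustrative baseline).
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # absorb singular values into the left factor
    B = Vt[:rank, :]
    return A, B

# Toy example (hypothetical sizes): compress a 768x3072 feed-forward weight to rank 64.
rng = np.random.default_rng(0)
W = rng.standard_normal((768, 3072)).astype(np.float32)
A, B = factorize_weight(W, rank=64)

print("parameters:", W.size, "->", A.size + B.size)
print("relative reconstruction error:",
      round(float(np.linalg.norm(W - A @ B) / np.linalg.norm(W)), 3))
```

The autoencoder-based framework described in the abstract instead learns the compressed representation of the weights (optionally compressing certain modules together across layers), and the reported experiments show it outperforms such factorization baselines.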
Accession Number: WOS:001181085100131
Publication Date: 2023-01-01
ISBN: 978-1-959429-47-0
Publisher Place: Stroudsburg
Pages: 1788-1805
Status: REVIEWED
Event name | Event place | Event date |
 | Dubrovnik, CROATIA | MAY 02-06, 2023 |
Funder | Grant Number |
National Science Centre (Poland) | 2019/33/B/ST6/00894 |
Natural Sciences at the Jagiellonian University | POIR.04.04.00-00-14DE/18-00 |
Foundation for Polish Science - European Union under the European Regional Development Fund | |