Overcoming Multi-model Forgetting

We identify a phenomenon, which we refer to as multi-model forgetting, that occurs when sequentially training multiple deep networks with partially shared parameters: the performance of previously trained models degrades as a subsequent one is optimized, because the shared parameters are overwritten. To overcome this, we introduce a statistically justified weight plasticity loss that regularizes the learning of a model’s shared parameters according to their importance for the previous models, and we demonstrate its effectiveness both when training two models sequentially and in neural architecture search. Adding weight plasticity to neural architecture search preserves the best models until the end of the search and yields improved results on both natural language processing and computer vision tasks.
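The weight plasticity loss is only described at a high level in this record. As a rough illustration of the general idea, the sketch below implements a generic importance-weighted quadratic penalty on the parameters shared with a previously trained model, in the spirit of Fisher-information-based regularizers; the function names, the diagonal squared-gradient importance estimate, and the `shared_names` bookkeeping are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (assumed form, not the paper's exact loss): penalize moving
# shared weights away from their values in the previously trained model,
# scaled by an estimate of how important each weight was for that model.
import torch


def estimate_importance(model, loss_fn, data_loader, shared_names):
    """Diagonal importance estimate: average squared gradient of the previous
    model's loss with respect to each shared parameter."""
    importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()
                  if n in shared_names}
    model.eval()
    for inputs, targets in data_loader:
        model.zero_grad()
        loss_fn(model(inputs), targets).backward()
        for n, p in model.named_parameters():
            if n in shared_names and p.grad is not None:
                importance[n] += p.grad.detach() ** 2
    return {n: v / max(len(data_loader), 1) for n, v in importance.items()}


def weight_plasticity_penalty(model, ref_params, importance, strength=1.0):
    """Quadratic penalty on shared parameters, weighted by their importance
    for the previously trained model (whose weights are in `ref_params`)."""
    penalty = 0.0
    for n, p in model.named_parameters():
        if n in ref_params:
            penalty = penalty + (importance[n] * (p - ref_params[n]) ** 2).sum()
    return 0.5 * strength * penalty
```

In this sketch, the second model would be trained on `task_loss + weight_plasticity_penalty(model, ref_params, importance)`, where `ref_params` holds detached copies of the shared weights taken after the first model was trained.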


Published in:
ICML 2019 - Proceedings of the 36th International Conference on Machine Learning, 97, 594-603
Presented at:
ICML 2019 - 36th International Conference on Machine Learning, Long Beach, California, USA, June 09-15, 2019
Year:
2019
Publisher:
JMLR