Authors: Lin, Tao; Stich, Sebastian Urban; Barba Flores, Luis Felipe; Dmitriev, Daniil; Jaggi, Martin
Date: 2020-05-15
Year: 2020
URI: https://infoscience.epfl.ch/handle/20.500.14299/168762

Abstract: Deep neural networks often have millions of parameters. This can hinder their deployment to low-end devices, not only because of high memory requirements but also because of increased latency at inference. We propose a novel model compression method that generates a sparse trained model without additional overhead: by (i) allowing dynamic allocation of the sparsity pattern and (ii) incorporating a feedback signal to reactivate prematurely pruned weights, we obtain a performant sparse model in a single training pass (retraining is not needed, but can further improve performance). We evaluate the method on CIFAR-10 and ImageNet and show that the obtained sparse models can reach the state-of-the-art performance of dense models and, furthermore, surpass all previously proposed pruning schemes that lack a feedback mechanism.

Keywords: optimization; deep learning; network pruning; dynamic reparameterization; model compression

Title: Dynamic Model Pruning with Feedback
Type: text::conference output::conference proceedings::conference paper
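
The abstract describes the two key ingredients, dynamic allocation of the sparsity pattern and a feedback signal that lets prematurely pruned weights recover. The sketch below illustrates one way to realize that idea in PyTorch, assuming per-layer magnitude-based top-k masking; the function names (`magnitude_mask`, `dpf_step`) and the exact masking criterion are illustrative assumptions, not the paper's precise procedure.

```python
import torch

def magnitude_mask(weight, sparsity):
    """Binary mask keeping the largest-magnitude entries of `weight` (illustrative criterion)."""
    k = max(1, int(weight.numel() * (1.0 - sparsity)))            # number of weights to keep
    threshold = weight.abs().flatten().kthvalue(weight.numel() - k + 1).values
    return (weight.abs() >= threshold).float()

def dpf_step(model, loss_fn, batch, optimizer, sparsity=0.9):
    """One training step of dynamic pruning with a feedback signal (sketch).

    Dense weights are kept throughout training; the forward/backward pass is
    evaluated at the masked (sparse) weights, but the resulting gradient is
    applied to the dense weights, so pruned weights keep receiving updates
    and can be reactivated once their magnitude recovers.
    """
    dense_backup = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                                            # prune weight matrices, skip biases
            dense_backup[name] = p.data.clone()                    # remember dense values
            p.data.mul_(magnitude_mask(p.data, sparsity))          # evaluate loss at the pruned point

    inputs, targets = batch
    loss = loss_fn(model(inputs), targets)
    optimizer.zero_grad()
    loss.backward()                                                # gradient at the sparse weights

    for name, p in model.named_parameters():
        if name in dense_backup:
            p.data.copy_(dense_backup[name])                       # restore dense weights ...
    optimizer.step()                                               # ... and update them with that gradient
    return loss.item()
```

Because the mask is recomputed from the current dense weights at every step, the sparsity pattern is reallocated dynamically, and applying the sparse-point gradient to the dense weights provides the feedback that allows wrongly pruned weights to re-enter the model.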