Movement Pruning: Adaptive Sparsity by Fine-Tuning
About
Magnitude pruning is a widely used strategy for reducing model size in pure supervised learning; however, it is less effective in the transfer learning regime that has become standard for state-of-the-art natural language processing applications. We propose the use of movement pruning, a simple, deterministic first-order weight pruning method that is more adaptive to pretrained model fine-tuning. We give mathematical foundations to the method and compare it to existing zeroth- and first-order pruning methods. Experiments show that when pruning large pretrained language models, movement pruning shows significant improvements in high-sparsity regimes. When combined with distillation, the approach achieves minimal accuracy loss with down to only 3% of the model parameters.
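The core idea above can be illustrated with a minimal sketch: movement pruning scores each weight by how much training moves it *away* from zero (the accumulated product of its negative gradient and its value), then keeps the highest-scoring weights, whereas magnitude pruning keeps the largest absolute values. The function names and the single-step accumulation below are illustrative assumptions, not the paper's released implementation.

```python
def movement_scores(weights, grads):
    """Movement pruning importance for flat weight/gradient lists.

    Illustrative one-step version of S_i = -sum_t (dL/dW_i)_t * (W_i)_t:
    a weight pushed away from zero (gradient opposing its sign) gets a
    positive score; one pulled toward zero gets a negative score.
    """
    return [-g * w for g, w in zip(grads, weights)]

def topk_mask(scores, keep_fraction):
    """Binary mask keeping the top `keep_fraction` of weights by score."""
    k = max(1, round(keep_fraction * len(scores)))
    threshold = sorted(scores, reverse=True)[k - 1]
    return [1 if s >= threshold else 0 for s in scores]

# Two weights of equal magnitude: magnitude pruning cannot separate them,
# but movement pruning keeps the one being pushed away from zero.
scores = movement_scores([1.0, 1.0], [-0.5, 0.5])  # [0.5, -0.5]
mask = topk_mask(scores, 0.5)                      # [1, 0]
```

In practice the scores are learned jointly with the weights during fine-tuning (with a straight-through estimator for the top-k mask), which is what makes the method adaptive to the transfer-learning regime.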
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Natural Language Understanding | GLUE (dev) | -- | -- | 504 |
| Natural Language Understanding | GLUE | -- | -- | 452 |
| Question Answering | SQuAD v1.1 (dev) | F1 Score | 84.9 | 375 |
| Image Classification | ImageNet-1K | Top-1 Accuracy | 82.1 | 137 |
| Question Answering | SQuAD | F1 | 87.6 | 127 |
| Natural Language Inference | MNLI (matched) | Accuracy | 81.2 | 110 |
| Natural Language Inference | MNLI | Accuracy (matched) | 82.5 | 80 |
| Paraphrase Identification | QQP | Accuracy | 91 | 78 |
| Natural Language Inference | MNLI (mismatched) | Accuracy | 81.8 | 68 |
| Natural Language Inference | MNLI (test) | Accuracy | 0.812 | 38 |