DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
About
As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models on the edge and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performance on a wide range of tasks, like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pre-training phase and show that it is possible to reduce the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by larger models during pre-training, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train, and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study.
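The triple loss described above combines a standard (masked) language modeling loss with a soft-target distillation loss over the teacher's output distribution and a cosine-distance loss between teacher and student hidden states. As a minimal sketch of how the two distillation terms can be computed, here is a pure-Python version; the temperature and the loss weights are illustrative assumptions, not the values used in the paper, and real implementations would operate on tensors rather than lists.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-target cross-entropy between teacher and student distributions,
    scaled by T^2 as is standard in knowledge distillation."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    ce = -sum(t * math.log(s) for t, s in zip(p_teacher, p_student))
    return temperature ** 2 * ce

def cosine_distance(u, v):
    """1 - cosine similarity between two hidden-state vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

def triple_loss(lm_loss, student_logits, teacher_logits,
                student_hidden, teacher_hidden,
                alpha=1.0, beta=1.0, gamma=1.0):
    """Weighted sum of the three terms; the weights here are hypothetical."""
    return (alpha * lm_loss
            + beta * distillation_loss(student_logits, teacher_logits)
            + gamma * cosine_distance(student_hidden, teacher_hidden))
```

A student whose logits match the teacher's incurs only the teacher distribution's entropy in the distillation term, so the loss pushes the student's output distribution toward the teacher's while the cosine term aligns their hidden representations.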
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Node Classification | Cora (test) | Mean Accuracy | 8.4 | 687 |
| Language Modeling | WikiText-103 (test) | Perplexity | 23.7 | 524 |
| Natural Language Understanding | GLUE (dev) | SST-2 (Acc) | 92.7 | 504 |
| Natural Language Understanding | GLUE | SST-2 | 92.7 | 452 |
| Natural Language Understanding | GLUE (test) | SST-2 Accuracy | 93.1 | 416 |
| Question Answering | SQuAD v1.1 (dev) | F1 Score | 86.9 | 375 |
| Sentiment Analysis | IMDB (test) | Accuracy | 92.9 | 248 |
| Natural Language Understanding | GLUE (val) | SST-2 | 87.7 | 170 |
| Question Answering | SQuAD v2.0 (dev) | F1 | 69.5 | 158 |
| Link Prediction | Citeseer | -- | -- | 146 |