Don't Stop Pretraining: Adapt Language Models to Domains and Tasks
About
Language models pretrained on text from a wide variety of sources form the foundation of today's NLP. In light of the success of these broad-coverage models, we investigate whether it is still helpful to tailor a pretrained model to the domain of a target task. We present a study across four domains (biomedical and computer science publications, news, and reviews) and eight classification tasks, showing that a second phase of pretraining in-domain (domain-adaptive pretraining) leads to performance gains, under both high- and low-resource settings. Moreover, adapting to the task's unlabeled data (task-adaptive pretraining) improves performance even after domain-adaptive pretraining. Finally, we show that adapting to a task corpus augmented using simple data selection strategies is an effective alternative, especially when resources for domain-adaptive pretraining might be unavailable. Overall, we consistently find that multi-phase adaptive pretraining offers large gains in task performance.
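The data selection strategy mentioned above retrieves unlabeled domain documents that are similar to the task's own data, forming an augmented corpus for adaptive pretraining. A minimal sketch of this idea, using plain bag-of-words cosine similarity as a stand-in for the paper's learned embeddings (function names and the `k` parameter are illustrative, not from the paper):

```python
from collections import Counter
import math

def bow_vector(text):
    # Lowercased bag-of-words counts as a crude document representation.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_knn(task_docs, domain_docs, k=2):
    # For each task document, keep its k nearest domain documents;
    # the union is the augmented corpus used for adaptive pretraining.
    task_vecs = [bow_vector(d) for d in task_docs]
    domain_vecs = [bow_vector(d) for d in domain_docs]
    selected = set()
    for tv in task_vecs:
        ranked = sorted(range(len(domain_docs)),
                        key=lambda i: cosine(tv, domain_vecs[i]),
                        reverse=True)
        selected.update(ranked[:k])
    return [domain_docs[i] for i in sorted(selected)]
```

In the paper this retrieval is done with learned sentence embeddings rather than raw word counts, but the shape of the procedure is the same: rank candidate domain text by similarity to the task corpus and pretrain on the top matches.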
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Sentiment Analysis | IMDB (test) | Accuracy | 95.79 | 248 |
| Sentiment Analysis | SST-2 (test) | Accuracy | 96 | 136 |
| Text Classification | AGNews | Accuracy | 93.9 | 119 |
| Language model detoxification | RealToxicityPrompts (test) | Distinct-1 | 57 | 54 |
| Language Modeling | (val) | Perplexity | 7.32 | 30 |
| Toxicity Evaluation | RealToxicityPrompts | -- | -- | 29 |
| Sentiment Steering | OpenWebText Neutral to Negative (test) | Perplexity (PPL) | 32.86 | 27 |
| Sentiment Steering | OpenWebText Neutral to Positive (test) | Perplexity (PPL) | 30.52 | 27 |
| Sentiment Analysis | Amazon Reviews (test) | Average Accuracy | 90.78 | 24 |
| Detoxification | RealToxicityPrompts | Avg Max Toxicity | 0.47 | 22 |