Text Summarization with Pretrained Encoders
About
Bidirectional Encoder Representations from Transformers (BERT) represents the latest incarnation of pretrained language models which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings. Our code is available at https://github.com/nlpyang/PreSumm
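The fine-tuning schedule described above uses a Noam-style learning-rate curve (linear warmup followed by inverse-square-root decay) with different peak rates and warmup lengths for the pretrained encoder and the randomly initialized decoder. A minimal sketch of that schedule, assuming the commonly reported hyperparameters (encoder: peak 2e-3, 20,000 warmup steps; decoder: peak 0.1, 10,000 warmup steps) — the function name `noam_lr` is illustrative, not from the released code:

```python
def noam_lr(step: int, peak_lr: float, warmup: int) -> float:
    """Noam-style schedule: linear warmup to peak_lr * warmup**-0.5,
    then inverse-square-root decay. `step` starts at 1."""
    return peak_lr * min(step ** -0.5, step * warmup ** -1.5)

# Two separate schedules, as in the proposed fine-tuning scheme:
# the pretrained encoder warms up slowly with a small rate, while the
# untrained decoder warms up faster with a larger rate.
for step in (1_000, 10_000, 20_000, 100_000):
    enc_lr = noam_lr(step, peak_lr=2e-3, warmup=20_000)  # encoder (assumed values)
    dec_lr = noam_lr(step, peak_lr=0.1, warmup=10_000)   # decoder (assumed values)
    print(f"step {step:>7}: encoder lr {enc_lr:.2e}, decoder lr {dec_lr:.2e}")
```

In practice each schedule would drive its own optimizer (e.g. one Adam instance per parameter group), so the encoder's weights change more gently than the decoder's during early fine-tuning.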
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Summarization | XSum (test) | ROUGE-2 | 16.5 | 231 |
| Abstractive Text Summarization | CNN/Daily Mail (test) | ROUGE-L | 39.18 | 169 |
| Summarization | XSum | ROUGE-2 | 16.5 | 108 |
| Summarization | arXiv | ROUGE-2 | 19.67 | 76 |
| Summarization | PubMed | ROUGE-1 | 49.1 | 70 |
| Summarization | CNN/Daily Mail | ROUGE-1 | 41.63 | 67 |
| Text Summarization | CNN/Daily Mail (test) | ROUGE-2 | 20.34 | 65 |
| Summarization | CNN/DM | ROUGE-1 | 43.78 | 56 |
| Abstractive Summarization | CNN/Daily Mail non-anonymous (test) | ROUGE-1 | 41.72 | 52 |
| Extractive Summarization | PubMed (test) | ROUGE-1 | 43.33 | 32 |