
Text Summarization with Pretrained Encoders

About

Bidirectional Encoder Representations from Transformers (BERT) represents the latest incarnation of pretrained language models which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings. Our code is available at https://github.com/nlpyang/PreSumm
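The fine-tuning schedule mentioned above can be sketched concretely. The paper assigns the pretrained encoder and the randomly initialized decoder separate Adam optimizers with different peak learning rates and warmup lengths, so the decoder trains quickly while the encoder changes slowly. Below is a minimal sketch of that schedule (a Noam-style warmup/decay); the hyperparameter values are the ones reported in the paper, but the function names are illustrative, not from the released code.

```python
def noam_lr(step, peak_lr, warmup):
    """lr = peak_lr * min(step^-0.5, step * warmup^-1.5): linear warmup,
    then inverse-square-root decay, peaking at step == warmup."""
    return peak_lr * min(step ** -0.5, step * warmup ** -1.5)

def encoder_lr(step):
    # Pretrained BERT encoder: small peak rate, long warmup (paper values).
    return noam_lr(step, peak_lr=2e-3, warmup=20000)

def decoder_lr(step):
    # Randomly initialized decoder: larger peak rate, shorter warmup.
    return noam_lr(step, peak_lr=0.1, warmup=10000)
```

Early in training the decoder's learning rate is orders of magnitude higher than the encoder's, which is exactly the mismatch-alleviating behavior the abstract describes.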

Yang Liu, Mirella Lapata • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Summarization | XSum (test) | ROUGE-2 | 16.5 | 246 |
| Abstractive Text Summarization | CNN/Daily Mail (test) | ROUGE-L | 39.18 | 169 |
| Summarization | XSum | ROUGE-2 | 16.5 | 108 |
| Summarization | arXiv | ROUGE-2 | 19.67 | 76 |
| Summarization | PubMed | ROUGE-1 | 49.1 | 70 |
| Summarization | CNN/Daily Mail | ROUGE-1 | 41.63 | 67 |
| Text Summarization | CNN/Daily Mail (test) | ROUGE-2 | 20.34 | 65 |
| Summarization | CNN/DM | ROUGE-1 | 43.78 | 56 |
| Abstractive Summarization | CNN/Daily Mail non-anonymous (test) | ROUGE-1 | 41.72 | 52 |
| Summarization | CNN/DailyMail (test) | -- | -- | 33 |

(10 of 63 rows shown)
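All results above are ROUGE scores, which measure n-gram overlap between a generated summary and a reference. A minimal sketch of recall-based ROUGE-N is below; note that the published numbers come from the official ROUGE toolkit (with stemming and F-score variants), so this is an illustration of the metric, not a reimplementation of the evaluation.

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_n_recall(candidate, reference, n=2):
    """Fraction of the reference's n-grams matched (with clipped counts)
    by the candidate summary."""
    ref = Counter(ngrams(reference.split(), n))
    cand = Counter(ngrams(candidate.split(), n))
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    total = sum(ref.values())
    return overlap / total if total else 0.0
```

For example, a candidate covering 2 of a reference's 5 bigrams scores ROUGE-2 recall of 0.4.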

Other info

Code

https://github.com/nlpyang/PreSumm
