
Bottom-Up Abstractive Summarization

About

Neural network-based methods for abstractive summarization produce outputs that are more fluent than other techniques, but which can be poor at content selection. This work proposes a simple technique for addressing this issue: use a data-efficient content selector to over-determine phrases in a source document that should be part of the summary. We use this selector as a bottom-up attention step to constrain the model to likely phrases. We show that this approach improves the ability to compress text, while still generating fluent summaries. This two-step process is both simpler and higher performing than other end-to-end content selection models, leading to significant improvements on ROUGE for both the CNN-DM and NYT corpus. Furthermore, the content selector can be trained with as little as 1,000 sentences, making it easy to transfer a trained summarizer to a new domain.
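The core idea, constraining the abstractive model's copy attention to phrases the content selector marks as likely, can be sketched as a simple masking step. The function and variable names below are hypothetical, and this is a minimal illustration of the masking idea rather than the paper's actual implementation: selector probabilities below a threshold zero out the corresponding attention weights, and the surviving weights are renormalized.

```python
def bottom_up_mask(copy_attn, select_prob, threshold=0.5, eps=1e-8):
    """Mask copy-attention weights over source tokens using content-selector
    probabilities, then renormalize (hypothetical sketch, not the paper's code).

    copy_attn:   attention weight per source token (sums to ~1)
    select_prob: selector's probability that each token belongs in the summary
    threshold:   tokens below this selection probability are masked out
    """
    # hard 0/1 mask from the selector's per-token probabilities
    mask = [1.0 if p >= threshold else 0.0 for p in select_prob]
    # zero out attention on tokens the selector considers unlikely
    masked = [a * m for a, m in zip(copy_attn, mask)]
    # renormalize so the remaining weights form a distribution again
    z = sum(masked) + eps
    return [m / z for m in masked]

# toy example over 5 source tokens
attn = [0.1, 0.4, 0.2, 0.2, 0.1]
sel = [0.9, 0.2, 0.8, 0.6, 0.1]
out = bottom_up_mask(attn, sel)
```

In this toy run, tokens 1 and 4 fall below the threshold, so their attention mass is redistributed over the selected tokens. Because the selector only gates an existing attention distribution, it can be trained separately on modest amounts of data, which is what makes the two-step approach data-efficient.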

Sebastian Gehrmann, Yuntian Deng, Alexander M. Rush • 2018

Related benchmarks

Task | Dataset | Result | Rank
Abstractive Text Summarization | CNN/Daily Mail (test) | ROUGE-L 38.34 | 169
Summarization | CNN Daily Mail | ROUGE-1 41.22 | 67
Text Summarization | CNN/Daily Mail (test) | ROUGE-2 18.68 | 65
Summarization | CNN/Daily Mail original, non-anonymized (test) | ROUGE-1 41.22 | 54
Abstractive Summarization | CNN/Daily Mail non-anonymous (test) | ROUGE-1 41.22 | 52
Multi-document summarization | Multi-News (test) | ROUGE-2 14.19 | 45
Summarization | CNNDM full-length F1 (test) | ROUGE-1 41.22 | 19
Abstractive Summarization | New York Times (test) | ROUGE-1 47.38 | 18
Summarization | CNN/Daily Mail full length (test) | ROUGE-1 41.22 | 18
Text Summarization | CNNDM | ROUGE-2 18.68 | 11
Showing 10 of 14 benchmark rows.

Other info

Code
