
A Deep Reinforced Model for Abstractive Summarization

About

Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries, however, these models often produce repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and the continuously generated output separately, and a new training method that combines standard supervised word prediction with reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias": they assume the ground truth is provided at each step during training. When standard word prediction is combined with the global sequence prediction training of RL, however, the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a ROUGE-1 score of 41.16 on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher-quality summaries.
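The mixed training objective described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `mixed_loss` and its toy arguments are hypothetical names, the per-token log-probabilities and ROUGE rewards would come from the model and an evaluation routine, and the paper weights the RL term with a mixing factor very close to 1 (the default here is only illustrative).

```python
def mixed_loss(logp_ml, logp_sample, r_sample, r_baseline, gamma=0.99):
    """Mixed objective: L = gamma * L_rl + (1 - gamma) * L_ml.

    logp_ml:     per-token log-probabilities of the ground-truth summary
    logp_sample: per-token log-probabilities of a sampled summary
    r_sample:    ROUGE reward of the sampled summary
    r_baseline:  ROUGE reward of the greedy (baseline) summary
    """
    # L_ml: maximum-likelihood loss, i.e. the negative log-likelihood
    # of the ground-truth summary under the model.
    l_ml = -sum(logp_ml)
    # L_rl: self-critical policy-gradient loss. When the sample beats
    # the greedy baseline (r_sample > r_baseline), raising the sample's
    # log-probability lowers the loss.
    l_rl = (r_baseline - r_sample) * sum(logp_sample)
    return gamma * l_rl + (1 - gamma) * l_ml

# Toy usage: the sampled summary scores a higher reward (0.42) than the
# greedy baseline (0.40), so increasing its log-probability helps.
loss = mixed_loss(logp_ml=[-0.2, -0.3],
                  logp_sample=[-0.5, -1.0],
                  r_sample=0.42, r_baseline=0.40, gamma=0.9)
```

The RL term alone tends to hurt readability, which is why the maximum-likelihood term is kept in the mix even at a small weight.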

Romain Paulus, Caiming Xiong, Richard Socher • 2017
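The intra-attention over the input that the abstract credits with reducing repetition can be sketched in plain Python. This is a hedged illustration under assumed shapes, not the paper's code: the raw score vectors would come from the decoder and encoder hidden states, and the key idea shown is the temporal normalization, where each exponentiated score is divided by the sum of exponentiated scores the same input token received at earlier decoding steps.

```python
import math

def intra_temporal_attention(score_history):
    """score_history: one raw attention-score vector over the input
    tokens per decoding step so far, with the current step last.

    Temporal normalization: down-weight input tokens that the decoder
    already attended to heavily at previous steps.
    """
    exp_hist = [[math.exp(s) for s in step] for step in score_history]
    current = exp_hist[-1]
    if len(exp_hist) == 1:
        penalized = current                    # first step: no history
    else:
        # Per-token sum of exponentiated scores over past steps.
        past = [sum(col) for col in zip(*exp_hist[:-1])]
        penalized = [c / p for c, p in zip(current, past)]
    total = sum(penalized)
    return [x / total for x in penalized]      # normalized weights

# Token 0 received heavy attention at step 1, so at step 2 it no longer
# dominates: the penalized distribution becomes uniform here.
weights = intra_temporal_attention([[2.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
```

A plain softmax at step 2 would again concentrate most of the mass on token 0; the temporal division is what spreads attention to fresh parts of the input.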

Related benchmarks

Task | Dataset | Metric | Result | Rank
Abstractive Text Summarization | CNN/Daily Mail (test) | ROUGE-L | 36.9 | 169
Text Summarization | CNN/Daily Mail (test) | ROUGE-2 | 15.82 | 65
Summarization | CNN/Daily Mail original, non-anonymized (test) | ROUGE-1 | 39.87 | 54
Abstractive Summarization | CNN/Daily Mail non-anonymous (test) | ROUGE-1 | 39.87 | 52
Abstractive Summarization | CNN/DailyMail full length F-1 (test) | ROUGE-1 | 41.16 | 48
Abstractive Summarization | CNN/DailyMail | ROUGE-1 | 38.3 | 25
Email Subject Line Generation | AESLC (dev) | ROUGE-1 | 15.12 | 21
Email Subject Line Generation | AESLC (test) | ROUGE-1 | 14.56 | 21
Extractive Summarization | NYT50 (test) | ROUGE-1 | 42.94 | 21
Summarization | CNNDM full-length F1 (test) | ROUGE-1 | 39.87 | 19
Showing 10 of 18 rows
