
Deep Communicating Agents for Abstractive Summarization

About

We present deep communicating agents in an encoder-decoder architecture to address the challenges of representing a long document for abstractive summarization. With deep communicating agents, the task of encoding a long text is divided across multiple collaborating agents, each in charge of a subsection of the input text. These encoders are connected to a single decoder, trained end-to-end using reinforcement learning to generate a focused and coherent summary. Empirical results demonstrate that multiple communicating encoders lead to a higher quality summary compared to several strong baselines, including those based on a single encoder or multiple non-communicating encoders.
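The core idea — dividing a long document across multiple encoder agents that exchange messages so each is aware of the rest of the text — can be illustrated with a toy sketch. This is a deliberately simplified assumption-laden illustration, not the paper's actual implementation (which uses stacked LSTM encoders, attention, and reinforcement learning); the chunking, the scalar "state," and the averaging message step here are all hypothetical stand-ins.

```python
# Toy sketch of deep communicating agents (simplified; NOT the paper's model).
# Each agent encodes one chunk of a long document; a communication round then
# lets every agent mix in the other agents' states, giving it global context.

from typing import List

def split_into_chunks(tokens: List[str], n_agents: int) -> List[List[str]]:
    """Divide a long token sequence across n_agents roughly evenly."""
    size = -(-len(tokens) // n_agents)  # ceiling division
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

def encode_chunk(chunk: List[str]) -> float:
    """Stand-in local encoder producing a single scalar 'state' per agent
    (hypothetical; the paper uses multi-layer LSTM encoders)."""
    return float(sum(len(t) for t in chunk))

def communicate(states: List[float]) -> List[float]:
    """One message-passing round: each agent averages its own state with
    the mean of the other agents' states."""
    out = []
    for i, s in enumerate(states):
        others = [x for j, x in enumerate(states) if j != i]
        msg = sum(others) / len(others) if others else 0.0
        out.append(0.5 * s + 0.5 * msg)
    return out

tokens = "deep communicating agents divide a long document across encoders".split()
chunks = split_into_chunks(tokens, n_agents=3)
local_states = [encode_chunk(c) for c in chunks]
global_states = communicate(local_states)  # each agent now reflects the whole document
```

In the paper, the decoder attends over all agents' outputs when generating the summary; in this sketch the communicated states simply stand in for that shared context.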

Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, Yejin Choi • 2018

Related benchmarks

Task | Dataset | Result | Rank
--- | --- | --- | ---
Abstractive Text Summarization | CNN/Daily Mail (test) | ROUGE-L 37.92 | 169
Text Summarization | CNN/Daily Mail (test) | ROUGE-2 19.47 | 65
Summarization | CNN/Daily Mail original, non-anonymized (test) | ROUGE-1 41.69 | 54
Abstractive Summarization | CNN/Daily Mail non-anonymized (test) | ROUGE-1 41.69 | 52
Abstractive Summarization | CNN/DailyMail full-length F-1 (test) | ROUGE-1 41.69 | 48
Extractive Summarization | CNN/Daily Mail (test) | ROUGE-1 41.69 | 36
Summarization | CNNDM full-length F1 (test) | ROUGE-1 41.69 | 19
Abstractive Summarization | New York Times (test) | ROUGE-1 48.08 | 18
Abstractive Summarization | CNN/Daily Mail anonymized (test) | ROUGE-1 41.69 | 11
Abstractive Summarization | New York Times, F1 variants of ROUGE (test) | ROUGE-1 48.08 | 10

Showing 10 of 11 rows.
