Cutting-off Redundant Repeating Generations for Neural Abstractive Summarization
About
This paper tackles the reduction of redundant repeating generation that is often observed in RNN-based encoder-decoder models. Our basic idea is to jointly estimate the upper-bound frequency of each target vocabulary word in the encoder and to control the output words based on this estimation in the decoder. Our method shows significant improvement over a strong RNN-based encoder-decoder baseline and achieves the best results on an abstractive summarization benchmark.
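The decoder-side control described above can be illustrated with a minimal greedy-decoding sketch. This is not the paper's actual model: the function name, the stand-in logit inputs, and the already-given `upper_bounds` array (which the paper instead estimates on the encoder side) are all hypothetical simplifications. The sketch only shows the cutting-off mechanism: once a token has been emitted as many times as its estimated upper-bound frequency, it is masked out of subsequent decoding steps.

```python
import numpy as np

def decode_with_frequency_cap(step_logits, upper_bounds, eos_id):
    """Greedy decoding that suppresses any token already emitted as many
    times as its estimated upper-bound frequency.

    step_logits  -- list of 1-D float arrays, one logit vector per decode
                    step (a stand-in for a real RNN decoder's outputs).
    upper_bounds -- per-vocabulary-item maximum emission counts; here given
                    directly, whereas the paper estimates them jointly with
                    the encoder.
    """
    counts = np.zeros_like(upper_bounds)
    output = []
    for logits in step_logits:
        masked = logits.copy()
        # Cut off tokens whose emission count has reached the upper bound.
        masked[counts >= upper_bounds] = -np.inf
        tok = int(np.argmax(masked))
        output.append(tok)
        if tok == eos_id:
            break
        counts[tok] += 1
    return output

# Toy vocabulary of 4 tokens (id 3 = end-of-sequence). The raw logits
# always prefer token 1, but its cap of 1 forces the decoder to fall
# back to token 2 on later steps instead of repeating.
steps = [np.array([0.0, 5.0, 1.0, 0.5])] * 3
print(decode_with_frequency_cap(steps, np.array([2, 1, 2, 2]), eos_id=3))
```

Without the mask, greedy decoding here would emit token 1 at every step; with the cap it emits token 1 once and then switches to the next-best token.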
Jun Suzuki, Masaaki Nagata • 2016
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text Summarization | DUC 2004 (test) | ROUGE-1 | 32.28 | 115 |
| Text Summarization | Gigaword (test) | ROUGE-1 | 36.3 | 75 |
| Summarization | DUC 2004 (75-byte limit) | ROUGE-1 Recall | 32.28 | 6 |
| Summarization | Gigaword (two length limits) | ROUGE-1 F-Score | 36.3 | 6 |