
GSum: A General Framework for Guided Neural Abstractive Summarization

About

Neural abstractive summarization models are flexible and can produce coherent summaries, but they are sometimes unfaithful and can be difficult to control. While previous studies attempt to provide different types of guidance to control the output and increase faithfulness, it is not clear how these strategies compare and contrast to each other. In this paper, we propose a general and extensible guided summarization framework (GSum) that can effectively take different kinds of external guidance as input, and we perform experiments across several different varieties. Experiments demonstrate that this model is effective, achieving state-of-the-art performance according to ROUGE on 4 popular summarization datasets when using highlighted sentences as guidance. In addition, we show that our guided model can generate more faithful summaries and demonstrate how different types of guidance generate qualitatively different summaries, lending a degree of controllability to the learned models.
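The best-performing guidance signal in the paper is highlighted source sentences. As a hedged illustration of that idea (this is a toy sketch, not the authors' code), one can build "oracle" guidance at training time by greedily selecting the source sentences with the highest word overlap against the reference summary; those sentences are then fed to the model as a second input alongside the document. The function names and the overlap heuristic below are hypothetical simplifications:

```python
# Toy sketch of highlighted-sentence guidance extraction (hypothetical helper
# names; the actual GSum implementation uses greedy ROUGE-based selection).

def word_overlap(candidate, reference):
    """Fraction of reference-summary words covered by a candidate sentence."""
    ref = set(reference.lower().split())
    cand = set(candidate.lower().split())
    return len(ref & cand) / max(len(ref), 1)

def extract_guidance(source_sentences, reference_summary, k=2):
    """Return the top-k source sentences by word overlap with the reference.

    At training time this acts as oracle guidance; at test time the paper
    instead predicts guidance with a separately trained extractor.
    """
    return sorted(source_sentences,
                  key=lambda s: word_overlap(s, reference_summary),
                  reverse=True)[:k]

doc = [
    "The city council approved the new budget on Monday.",
    "Local residents attended the meeting in large numbers.",
    "The budget includes funding for schools and parks.",
]
summary = "Council approved a budget funding schools and parks."
guidance = extract_guidance(doc, summary, k=2)
# The guided model would then encode both `doc` and `guidance` as inputs.
```

In the framework itself, the guidance signal is encoded by a separate encoder (or a shared one) and attended to by the decoder, which is what lets the same architecture swap in other guidance types such as keywords or relation triples.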

Zi-Yi Dou, Pengfei Liu, Hiroaki Hayashi, Zhengbao Jiang, Graham Neubig• 2020

Related benchmarks

Task                            Dataset                           Metric    Result   Rank
Summarization                   XSum (test)                       ROUGE-2   21.89    231
Abstractive Text Summarization  CNN/Daily Mail (test)             ROUGE-L   42.48    169
Summarization                   CNNDM (test)                      ROUGE-2   22.32    20
Summarization                   ENTSUM 1.0 (test)                 ROUGE-1   40.29    13
Abstractive Summarization       CNNDM (test)                      ROUGE-1   45.94    11
Self-introduction generation    Twitter (test)                    ROUGE-1   22.19    11
Text Summarization              CNNDM (test)                      1st Percent  11.11  4
Self-introduction generation    Twitter Self-introduction (test)  Fluency   3.43     3
