Hierarchical Transformers for Multi-Document Summarization

About

In this paper, we develop a neural summarization model which can effectively process multiple input documents; it builds on the Transformer architecture, extending it with the ability to encode documents in a hierarchical manner. We represent cross-document relationships via an attention mechanism which allows information to be shared across documents, as opposed to simply concatenating text spans and processing them as a flat sequence. Our model learns latent dependencies among textual units, but can also take advantage of explicit graph representations capturing similarity or discourse relations. Empirical results on the WikiSum dataset demonstrate that the proposed architecture brings substantial improvements over several strong baselines.

Yang Liu, Mirella Lapata · 2019
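To make the hierarchical encoding and cross-document attention described above concrete, the sketch below shows one plausible reading in PyTorch: a local Transformer encodes tokens within each paragraph, and pooled paragraph vectors are then passed through a global Transformer whose self-attention shares information across documents. This is a minimal illustration, not the authors' implementation; the `HierarchicalEncoder` class, the mean pooling, and all hyperparameters are assumptions (the paper additionally uses multi-head pooling and can incorporate learned graph-structured attention).

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Minimal sketch of hierarchical multi-document encoding:
    a local Transformer attends over tokens within each paragraph,
    then a global Transformer attends across paragraph vectors so
    information is shared between documents (simplified illustration,
    not the published model)."""

    def __init__(self, vocab_size, d_model=256, nhead=8, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        local_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.local_enc = nn.TransformerEncoder(local_layer, num_layers)    # within-paragraph
        global_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.global_enc = nn.TransformerEncoder(global_layer, num_layers)  # across paragraphs

    def forward(self, tokens):
        # tokens: (batch, n_paragraphs, n_tokens) of token ids
        b, p, t = tokens.shape
        x = self.embed(tokens.reshape(b * p, t))   # (b*p, t, d_model)
        x = self.local_enc(x)                      # token-level self-attention
        para = x.mean(dim=1).reshape(b, p, -1)     # pool tokens -> one vector per paragraph
        para = self.global_enc(para)               # inter-paragraph (cross-document) attention
        return para                                # (b, p, d_model), cross-document-aware

enc = HierarchicalEncoder(vocab_size=1000)
out = enc(torch.randint(0, 1000, (2, 4, 16)))  # 2 examples, 4 paragraphs, 16 tokens each
print(out.shape)  # torch.Size([2, 4, 256])
```

The key design point, per the abstract, is that attention operates at the paragraph level rather than over one long concatenated sequence, which is what lets the model represent cross-document relationships.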

Related benchmarks

Task                           Dataset                    Metric        Result   Rank
Multi-document summarization   WikiSum (test)             ROUGE-1       41.53    14
Abstractive Summarization      WikiCatSum Company (test)  Completeness  2.96     5
Abstractive Summarization      WikiCatSum Film (test)     Completeness  3.13     5
Abstractive Summarization      WikiCatSum Animal (test)   Completeness  2.8      5
Wikipedia Abstract Generation  WikiCatSum Film (test)     ROUGE-1       24.6     5
Wikipedia Abstract Generation  WikiCatSum Company (test)  ROUGE-1       0.133    5
Wikipedia Abstract Generation  WikiCatSum Animal (test)   ROUGE-1       16.5     5
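The ROUGE-1 scores in the table measure unigram overlap between a generated summary and a reference. For quick reference, below is a minimal sketch of the ROUGE-1 F1 computation on lowercased whitespace tokens; the `rouge1_f1` helper is hypothetical, and the published scores come from the official ROUGE toolkit, which applies stemming and other preprocessing.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: unigram overlap between candidate and reference.
    Minimal sketch on whitespace tokens; the official ROUGE script
    additionally applies stemming and related preprocessing."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped count of matching unigrams
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f1("the cat sat on the mat", "the cat is on the mat"))  # ~0.833
```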
