
HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization

About

Neural extractive summarization models usually employ a hierarchical encoder for document encoding, and they are trained using sentence-level labels, which are created heuristically using rule-based methods. Training the hierarchical encoder with these inaccurate labels is challenging. Inspired by recent work on pre-training Transformer sentence encoders (Devlin et al., 2018), we propose HIBERT (as shorthand for HIerarchical Bidirectional Encoder Representations from Transformers) for document encoding, together with a method to pre-train it using unlabeled data. Applying the pre-trained HIBERT to our summarization model outperforms a randomly initialized counterpart by 1.25 ROUGE on the CNN/Dailymail dataset and by 2.0 ROUGE on a version of the New York Times dataset. We also achieve state-of-the-art performance on these two datasets.
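
At a high level, the architecture described above stacks a sentence-level Transformer under a document-level Transformer, with a per-sentence scorer for extractive selection. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: the class name, mean pooling over tokens, and hyperparameters (d_model=512, 8 heads, 6 layers per level) are assumptions chosen for illustration.

```python
# Minimal sketch (assumption, not the paper's code) of a HIBERT-style
# hierarchical encoder: a sentence-level Transformer encodes each sentence,
# a document-level Transformer contextualizes the resulting sentence vectors,
# and a linear head produces one extraction logit per sentence.
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        sent_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        doc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.sent_encoder = nn.TransformerEncoder(sent_layer, num_layers)
        self.doc_encoder = nn.TransformerEncoder(doc_layer, num_layers)
        self.score = nn.Linear(d_model, 1)  # per-sentence extraction logit

    def forward(self, token_ids):
        # token_ids: (batch, num_sents, num_tokens)
        b, s, t = token_ids.shape
        x = self.embed(token_ids.view(b * s, t))   # encode every sentence independently
        x = self.sent_encoder(x).mean(dim=1)       # pool tokens into one vector per sentence
        doc = self.doc_encoder(x.view(b, s, -1))   # contextualize sentences within the document
        return self.score(doc).squeeze(-1)         # (batch, num_sents) extraction logits

# Usage: extraction logits for a toy batch of 2 documents,
# each with 5 sentences of 20 tokens.
model = HierarchicalEncoder(vocab_size=30000)
logits = model(torch.randint(1, 30000, (2, 5, 20)))
print(logits.shape)  # torch.Size([2, 5])
```

In the paper, pre-training on unlabeled documents initializes this hierarchical encoder before it is fine-tuned with the heuristic sentence-level labels; the sketch only covers the encoder and scoring head.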

Xingxing Zhang, Furu Wei, Ming Zhou • 2019

Related benchmarks

Task | Dataset | Result | Rank
Text Summarization | CNN/Daily Mail (test) | ROUGE-2: 19.95 | 65
Summarization | CNN/DM | ROUGE-1: 42.37 | 56
Extractive Summarization | CNN/Daily Mail (test) | ROUGE-1: 30 | 36
Extractive Summarization | NYT50 (test) | ROUGE-1: 49.47 | 21
Summarization | CNNDM full-length F1 (test) | ROUGE-1: 42.37 | 19
Summarization | CNN/Daily Mail full length (test) | ROUGE-1: 42.37 | 18
Extractive Summarization | CNN-DM (test) | ROUGE-1: 42.37 | 18
Document Classification | MIND (test) | Accuracy: 0.8189 | 12
Document Classification | IMDB (test) | Accuracy: 52.96 | 10
Summarization | PubMed Short | ROUGE-1: 42.03 | 6
Showing 10 of 11 rows
