
Z-Code++: A Pre-trained Language Model Optimized for Abstractive Summarization

About

This paper presents Z-Code++, a new pre-trained language model optimized for abstractive text summarization. The model extends the state-of-the-art encoder-decoder architecture with three techniques. First, a two-phase pre-training process improves the model's performance on low-resource summarization tasks: the model is first pre-trained on text corpora for language understanding, and then continually pre-trained on summarization corpora for grounded text generation. Second, the self-attention layers in the encoder are replaced with disentangled attention layers, where each word is represented by two vectors that encode its content and position, respectively. Third, fusion-in-encoder provides a simple yet effective method of encoding long sequences hierarchically. Z-Code++ sets a new state of the art on 9 out of 13 text summarization tasks across 5 languages. The model is parameter-efficient: it outperforms the 600x larger PaLM-540B on XSum, and the fine-tuned 200x larger GPT3-175B on SAMSum. In zero-shot and few-shot settings, it substantially outperforms the competing models.
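The fusion-in-encoder idea described above can be illustrated with a minimal sketch: the long input is split into fixed-size chunks, each chunk is encoded locally on its own, and the resulting local representations are then concatenated and re-encoded globally so that chunks can attend to one another. The function names, the chunking scheme, and the toy encoders here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fusion_in_encoder(tokens, chunk_size, local_encoder, global_encoder):
    """Hierarchical encoding sketch: local per-chunk encoding, then global fusion.

    tokens:         the full (long) input sequence
    chunk_size:     number of tokens per local chunk (an assumed, fixed scheme)
    local_encoder:  maps a chunk of tokens to a (chunk_len, d) array
    global_encoder: maps the fused (seq_len, d) array to the final representation
    """
    # Phase 1: split the long input into fixed-size chunks and encode each
    # chunk independently, so attention cost stays local.
    chunks = [tokens[i:i + chunk_size] for i in range(0, len(tokens), chunk_size)]
    local_states = [local_encoder(chunk) for chunk in chunks]

    # Phase 2: concatenate the per-chunk representations and run a global
    # encoder over the fused sequence, letting information flow across chunks.
    fused = np.concatenate(local_states, axis=0)
    return global_encoder(fused)
```

With a toy local encoder that embeds each token into a 2-dimensional vector and an identity global encoder, a 10-token input encoded in chunks of 4 yields a (10, 2) fused representation; in the real model both encoders would be stacks of transformer layers.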

Pengcheng He, Baolin Peng, Liyang Lu, Song Wang, Jie Mei, Yang Liu, Ruochen Xu, Hany Hassan Awadalla, Yu Shi, Chenguang Zhu, Wayne Xiong, Michael Zeng, Jianfeng Gao, Xuedong Huang • 2022

Related benchmarks

Task                               | Dataset           | Metric           | Result | Rank
-----------------------------------|-------------------|------------------|--------|-----
Natural Language Understanding     | GLUE (dev)        | SST-2 (Accuracy) | 96.5   | 504
Natural Language Understanding     | GLUE (test)       | SST-2 (Accuracy) | 97.9   | 416
Summarization                      | XSum (test)       | ROUGE-2          | 24.7   | 231
Dialogue Summarization             | SamSum (test)     | ROUGE-2          | 30.3   | 80
Natural Language Generation        | E2E (test)        | ROUGE-L          | 54     | 79
Abstractive Dialogue Summarization | SamSum (test)     | ROUGE-L          | 43.9   | 53
Multi-Document Summarization       | Multi-News (test) | ROUGE-2          | 21.6   | 45
Abstractive Summarization          | XSum (test)       | ROUGE-L          | 33.6   | 44
Summarization                      | Newsroom (test)   | ROUGE-2          | 33.1   | 40
Long Document Summarization        | arXiv (test)      | ROUGE-2          | 22.5   | 24

Showing 10 of 23 rows.
