
Adapting Pretrained Text-to-Text Models for Long Text Sequences

About

We present an empirical study of adapting an existing pretrained text-to-text model to long-sequence inputs. Through a comprehensive analysis along three axes of the pretraining pipeline (model architecture, optimization objective, and pretraining corpus), we propose an effective recipe for building long-context models from existing short-context models. Specifically, we replace the full attention in transformers with pooling-augmented blockwise attention and pretrain the model with a masked-span prediction task using spans of varying length. In terms of the pretraining corpus, we find that randomly concatenating short documents from a large open-domain corpus yields better performance than using existing long-document corpora, which are typically limited in domain coverage. With these findings, we build a long-context model that achieves competitive performance on long-text QA tasks and establishes a new state of the art on five long-text summarization datasets, often outperforming previous methods with larger model sizes. Our code has been released at https://github.com/facebookresearch/bart_ls.
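The paper's full implementation lives in the repository above; the minimal single-head sketch below only illustrates the attention pattern the abstract describes. The function name `pooled_block_attention`, the `block_size` value, and the choice of mean pooling for the block summaries are illustrative assumptions, not the paper's exact configuration. The idea: each query attends to the tokens inside its own block plus pooled summaries of every block, giving a coarse global view at roughly O(n * (block_size + n / block_size)) cost instead of full O(n^2) attention.

```python
import torch
import torch.nn.functional as F

def pooled_block_attention(q, k, v, block_size=128):
    """Simplified sketch of pooling-augmented blockwise attention.

    Each query attends to (a) the keys inside its own block and
    (b) mean-pooled summaries of all blocks, so every token gets a
    coarse global view without quadratic cost.
    """
    n, d = q.shape
    assert n % block_size == 0, "pad the sequence to a multiple of block_size"
    nb = n // block_size

    # Reshape the sequence into (num_blocks, block_size, dim) views.
    qb = q.view(nb, block_size, d)
    kb = k.view(nb, block_size, d)
    vb = v.view(nb, block_size, d)

    # Mean-pooled per-block summaries serve as global keys/values.
    k_pool = kb.mean(dim=1)  # (nb, d)
    v_pool = vb.mean(dim=1)  # (nb, d)

    # Local scores: queries vs. keys within the same block.
    local = torch.einsum("bqd,bkd->bqk", qb, kb)    # (nb, block, block)
    # Global scores: queries vs. pooled block summaries.
    glob = torch.einsum("bqd,pd->bqp", qb, k_pool)  # (nb, block, nb)

    # One softmax over the concatenated local + global key set.
    scores = torch.cat([local, glob], dim=-1) / d ** 0.5
    attn = F.softmax(scores, dim=-1)
    a_local, a_glob = attn[..., :block_size], attn[..., block_size:]

    out = torch.einsum("bqk,bkd->bqd", a_local, vb) \
        + torch.einsum("bqp,pd->bqd", a_glob, v_pool)
    return out.reshape(n, d)

# Example: 1,024 tokens, a 64-dim single head, 128-token blocks.
q = torch.randn(1024, 64); k = torch.randn(1024, 64); v = torch.randn(1024, 64)
out = pooled_block_attention(q, k, v)  # (1024, 64)
```

The varying-length masked-span objective can be sketched in the same spirit. The `mask_token`, the masking ratio, the span-length choices, and the start probability below are all illustrative placeholders rather than the paper's exact settings:

```python
import random

def corrupt_with_varied_spans(tokens, mask_token, mask_ratio=0.15,
                              span_choices=(3, 8, 64)):
    """Sketch of masked-span corruption with spans of varying length.

    Walks the sequence, occasionally replacing a randomly sized span
    with a single mask token, and stops masking once roughly
    mask_ratio of the input has been corrupted. The model is then
    trained to reconstruct the masked-out spans.
    """
    tokens = list(tokens)
    budget = int(len(tokens) * mask_ratio)
    masked, i, out = 0, 0, []
    while i < len(tokens):
        if masked < budget and random.random() < 0.05:
            span = random.choice(span_choices)
            out.append(mask_token)  # one sentinel stands in for the span
            i += span               # may overshoot near the end; fine for a sketch
            masked += span
        else:
            out.append(tokens[i])
            i += 1
    return out
```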

Wenhan Xiong, Anchit Gupta, Shubham Toshniwal, Yashar Mehdad, Wen-tau Yih • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Summarization | PubMed (test) | ROUGE-1 | 50.3 | 107
Question Answering | NarrativeQA (test) | -- | -- | 61
Document Summarization | GovReport (test) | ROUGE-1 | 62 | 50
Query-Based Meeting Summarization | QMSum (test) | ROUGE-1 | 37.9 | 26
Long Document Summarization | arXiv (test) | ROUGE-2 | 22.1 | 24
Summarization | BookSum Chapter Level | ROUGE-1 | 38.5 | 14
Question Answering | QASPER Extractive (test) | F1 | 48.7 | 8
Query-Focused Summarization | QMSum (test) | ROUGE-1 | 37.9 | 7
Dialogue Summarization | TVMegaSite | ROUGE-1 | 51.8 | 6
Narrative Summarization | ForeverDreaming | ROUGE-1 | 39.1 | 6

(10 of 14 rows shown)

Other info

Code: https://github.com/facebookresearch/bart_ls
