
LongT5: Efficient Text-To-Text Transformer for Long Sequences

About

Recent work has shown that either (1) increasing the input length or (2) increasing model size can improve the performance of Transformer-based neural models. In this paper, we present a new model, called LongT5, with which we explore the effects of scaling both the input length and model size at the same time. Specifically, we integrated attention ideas from long-input transformers (ETC) and adopted pre-training strategies from summarization pre-training (PEGASUS) into the scalable T5 architecture. The result is a new attention mechanism we call Transient Global (TGlobal), which mimics ETC's local/global attention mechanism, but without requiring additional side inputs. We are able to achieve state-of-the-art results on several summarization tasks and outperform the original T5 models on question answering tasks.
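To make the mechanism concrete, below is a minimal, single-head NumPy sketch of TGlobal attention. It is an illustration under simplifying assumptions, not the paper's implementation: real LongT5 uses learned query/key/value projections, multiple heads, and relative position biases, and builds its global tokens by summing each block's embeddings and applying a layer norm, whereas this sketch simply averages. The radius r and block size k are free parameters here (the paper uses r = 127 and k = 16).

```python
import numpy as np

def tglobal_attention(x, r=2, k=4):
    """Single-head sketch of Transient Global (TGlobal) attention.

    x: (n, d) array of token embeddings
    r: local attention radius (LongT5 uses 127)
    k: block size for the transient global tokens (LongT5 uses 16)
    """
    n, d = x.shape

    # Transient global tokens: one per block of k inputs, computed on the
    # fly from the input itself, so no extra side inputs are needed.
    # (The paper sums each block and applies a layer norm; we average.)
    n_blocks = -(-n // k)                            # ceil(n / k)
    x_pad = np.pad(x, ((0, n_blocks * k - n), (0, 0)))
    g = x_pad.reshape(n_blocks, k, d).mean(axis=1)   # (n_blocks, d)

    # Each token attends to its local window of radius r plus all global
    # tokens, giving roughly O(n * (r + n/k)) cost instead of O(n^2).
    out = np.zeros_like(x)
    for i in range(n):
        window = x[max(0, i - r): i + r + 1]
        kv = np.concatenate([window, g], axis=0)     # keys == values here
        scores = kv @ x[i] / np.sqrt(d)
        w = np.exp(scores - scores.max())
        out[i] = (w / w.sum()) @ kv
    return out

tokens = np.random.randn(10, 8)
print(tglobal_attention(tokens).shape)  # (10, 8)
```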

Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang • 2021

Related benchmarks

Task                      | Dataset                       | Metric  | Result | Rank
Summarization             | arXiv (test)                  | ROUGE-1 | 48.35  | 161
Summarization             | PubMed (test)                 | ROUGE-1 | 50.23  | 107
Summarization             | arXiv                         | ROUGE-2 | 21.92  | 76
Question Answering        | Natural Questions (NQ) (dev)  | F1      | 66.61  | 72
Summarization             | PubMed                        | ROUGE-1 | 50.23  | 70
Summarization             | CNN Daily Mail                | ROUGE-1 | 43.94  | 67
Text Summarization        | CNN/Daily Mail (test)         | ROUGE-2 | 21.4   | 65
Summarization             | BigPatent                     | ROUGE-1 | 76.87  | 61
Question Answering        | NarrativeQA (test)            | --      | --     | 61
Abstractive Summarization | Multi-News                    | ROUGE-2 | 19.43  | 47

(10 of 28 rows shown)

Other info

Code
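For a quick end-to-end try-out, the sketch below assumes the Hugging Face Transformers port of LongT5 and the public google/long-t5-tglobal-base checkpoint; the model class and checkpoint name are assumptions about that ecosystem, not something specified in this paper.

```python
# Assumes: pip install transformers torch sentencepiece
from transformers import AutoTokenizer, LongT5ForConditionalGeneration

name = "google/long-t5-tglobal-base"   # assumed public TGlobal checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = LongT5ForConditionalGeneration.from_pretrained(name)

# TGlobal attention lets the encoder handle inputs far longer than
# vanilla T5's usual 512 tokens.
document = "..."  # a long article to summarize
inputs = tokenizer(document, return_tensors="pt",
                   truncation=True, max_length=4096)
ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```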
