
ETC: Encoding Long and Structured Inputs in Transformers

About

Transformer models have advanced the state of the art in many Natural Language Processing (NLP) tasks. In this paper, we present a new Transformer architecture, Extended Transformer Construction (ETC), that addresses two key challenges of standard Transformer architectures, namely scaling input length and encoding structured inputs. To scale attention to longer inputs, we introduce a novel global-local attention mechanism between global tokens and regular input tokens. We also show that combining global-local attention with relative position encodings and a Contrastive Predictive Coding (CPC) pre-training objective allows ETC to encode structured inputs. We achieve state-of-the-art results on four natural language datasets requiring long and/or structured inputs.
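The abstract describes ETC's global-local attention: a small set of global tokens attends to the full input, while regular long-input tokens attend only to the global tokens and a local window of neighbouring tokens. As a rough illustration only (not the authors' implementation; the function name, token ordering, and parameters below are hypothetical), a minimal NumPy sketch of the resulting sparse attention mask might look like this:

```python
import numpy as np

def etc_style_attention_mask(num_global: int, num_long: int, local_radius: int) -> np.ndarray:
    """Boolean attention mask in the spirit of ETC's global-local attention.

    Assumes token order [global tokens, long-input tokens]:
    - global tokens attend to all tokens (global-to-global, global-to-long);
    - long-input tokens attend to every global token (long-to-global) and to
      long-input neighbours within `local_radius` positions (long-to-long).
    """
    n = num_global + num_long
    mask = np.zeros((n, n), dtype=bool)

    # Global tokens attend everywhere.
    mask[:num_global, :] = True

    # Long-input tokens attend to all global tokens.
    mask[num_global:, :num_global] = True

    # Long-input tokens attend to a sliding local window of other long tokens.
    for i in range(num_long):
        lo = max(0, i - local_radius)
        hi = min(num_long, i + local_radius + 1)
        mask[num_global + i, num_global + lo : num_global + hi] = True

    return mask

# Example: 2 global tokens, 8 long-input tokens, local window radius 2.
print(etc_style_attention_mask(2, 8, 2).astype(int))
```

Because each long-input token touches only the global tokens plus a fixed-size window, attention cost grows linearly with input length rather than quadratically, which is what lets the model scale to long inputs.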

Joshua Ainslie, Santiago Ontanon, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, Li Yang • 2020

Related benchmarks

Task | Dataset | Result | Rank
Question Answering | Natural Questions (NQ) (dev) | -- | 72
Multi-hop Question Answering | HotpotQA (dev) | Answer F1: 81.3 | 43
Question Answering | HotpotQA (dev) | -- | 43
Question Answering | HotpotQA distractor setting (test) | Answer F1: 81.2 | 34
Question Answering | HybridQA (test) | -- | 23
Question Answering | HybridQA (dev) | -- | 17
Keyphrase Extraction | OpenKP (dev) | F1@3: 44.06 | 13
Multi-hop Question Answering | Wikihop (dev) | Accuracy: 79.8 | 10
Question Answering | QASPER Extractive (dev) | F1: 24.6 | 8
Question Answering | QASPER Extractive (test) | F1: 27 | 8

(Showing 10 of 18 rows)

Other info

Code
