
Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale

About

We introduce Transformer Grammars (TGs), a novel class of Transformer language models that combine (i) the expressive power, scalability, and strong performance of Transformers and (ii) recursive syntactic compositions, which here are implemented through a special attention mask and deterministic transformation of the linearized tree. We find that TGs outperform various strong baselines on sentence-level language modeling perplexity, as well as on multiple syntax-sensitive language modeling evaluation metrics. Additionally, we find that the recursive syntactic composition bottleneck which represents each sentence as a single vector harms perplexity on document-level language modeling, providing evidence that a different kind of memory mechanism -- one that is independent of composed syntactic representations -- plays an important role in current successful models of long text.
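The "special attention mask and deterministic transformation of the linearized tree" mentioned above can be made concrete with a short sketch. The Python below is a minimal illustration under assumed conventions: tokens like "(NP" open a constituent and "NP)" close it, each closing non-terminal is duplicated so that one copy composes the constituent and the other attends to the stack, and the helper names `transform` and `attention_mask` are hypothetical. It is a sketch of the general idea, not the paper's released implementation.

```python
# Minimal sketch of the two mechanisms named in the abstract: a
# deterministic transformation of the linearized parse tree and a
# constituent-aware attention mask. Token conventions, the duplication
# of closing non-terminals, and the exact masking rule are simplified
# assumptions for illustration.

def transform(linearized):
    """Duplicate every closing non-terminal: the first copy acts as a
    COMPOSE position (summarizing the finished constituent), the second
    as an ordinary STACK position that sees the updated stack."""
    out = []
    for tok in linearized:
        out.append(tok)
        if tok.endswith(")"):
            out.append(tok)
    return out


def attention_mask(tokens):
    """Boolean mask over the transformed sequence:
    mask[i][j] is True iff position i may attend to position j."""
    n = len(tokens)
    mask = [[False] * n for _ in range(n)]
    stack = []  # indices of positions currently "open" on the stack
    i = 0
    while i < n:
        tok = tokens[i]
        if tok.endswith(")"):
            # First copy: COMPOSE. Attend only to the constituent being
            # closed, i.e. everything back to the matching opening NT.
            constituent = []
            while tokens[stack[-1]] != "(" + tok[:-1]:
                constituent.append(stack.pop())
            constituent.append(stack.pop())  # the opening NT itself
            for j in constituent + [i]:
                mask[i][j] = True
            stack.append(i)  # composed result replaces its children
            # Second copy: STACK. Attend to the updated stack, where the
            # whole constituent is now a single composed entry.
            for j in stack + [i + 1]:
                mask[i + 1][j] = True
            i += 2
        else:
            # Opening NT or terminal: attend to the stack plus itself.
            for j in stack + [i]:
                mask[i][j] = True
            stack.append(i)
            i += 1
    return mask


tree = ["(S", "(NP", "the", "dog", "NP)", "(VP", "barks", "VP)", "S)"]
seq = transform(tree)
mask = attention_mask(seq)
```

Because the stack only ever holds earlier positions, the resulting mask is causal, so under these assumptions it can replace the standard causal mask in an ordinary Transformer decoder without other architectural changes. This sketch also makes the abstract's final observation visible: once "S)" composes, the whole sentence is reachable only through a single stack entry, which is the representational bottleneck the paper identifies for document-level modeling.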

Laurent Sartran, Samuel Barrett, Adhiguna Kuncoro, Miloš Stanojević, Phil Blunsom, Chris Dyer • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Summarization | XSum | ROUGE-2 | 9.47 | 108 |
| Language Modeling | BLLIP-LG (test) | PPL | 14.4 | 35 |
| Syntactic Generalization | SG | SG Score | 82.5 | 24 |
| Syntactic Evaluation | SyntaxGym Overall | Accuracy | 82.5 | 14 |
| Dialogue | DailyDialog | R-1 | 14.99 | 10 |
| Syntactic Generalization | BLiMP (test) | BLiMP Accuracy | 0.773 | 8 |
| Parse Reranking | Penn Treebank (PTB) CoreNLP 3.3.0 converted (test) | UAS | 0.97 | 3 |
