Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale
About
We introduce Transformer Grammars (TGs), a novel class of Transformer language models that combine (i) the expressive power, scalability, and strong performance of Transformers and (ii) recursive syntactic composition, which is implemented here through a special attention mask and a deterministic transformation of the linearized tree. We find that TGs outperform various strong baselines on sentence-level language modeling perplexity, as well as on multiple syntax-sensitive language modeling evaluation metrics. Additionally, we find that the recursive syntactic composition bottleneck, which represents each sentence as a single vector, harms perplexity on document-level language modeling, providing evidence that a different kind of memory mechanism -- one that is independent of composed syntactic representations -- plays an important role in current successful models of long text.
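The "deterministic transformation of the linearized tree" can be illustrated with a small sketch. Below is a hypothetical Python example (function names and tree encoding are illustrative, not from the TG codebase): a parse tree is flattened into a Choe-Charniak-style token sequence, and each closing non-terminal is then duplicated, so that one copy can trigger recursive composition of the constituent while the other attends to the resulting single composed representation.

```python
# Illustrative sketch of linearizing a parse tree and applying a
# TG-style transform that duplicates closing non-terminals.
# The tree encoding (nested tuples, strings for terminals) is an assumption.

def linearize(tree):
    """Flatten a tree into tokens like '(S', '(NP', 'the', 'dog', 'NP)', ..."""
    if isinstance(tree, str):          # terminal word
        return [tree]
    label, *children = tree            # non-terminal with children
    tokens = [f"({label}"]
    for child in children:
        tokens.extend(linearize(child))
    tokens.append(f"{label})")         # closing non-terminal
    return tokens

def duplicate_closes(tokens):
    """Deterministic transform: emit every closing non-terminal twice."""
    out = []
    for tok in tokens:
        out.append(tok)
        if tok.endswith(")"):          # e.g. 'NP)' -> 'NP)', 'NP)'
            out.append(tok)
    return out
```

For example, `duplicate_closes(linearize(("S", ("NP", "the", "dog"), ("VP", "barks"))))` yields a sequence in which `NP)`, `VP)`, and `S)` each appear twice; the special attention mask can then restrict the first copy to attend within its constituent and the second to the composed vector.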
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Summarization | Xsum | ROUGE-2 | 9.47 | 108 |
| Language Modeling | BLLIP-LG (test) | PPL | 14.4 | 35 |
| Syntactic Generalization | SG | SG Score | 82.5 | 24 |
| Syntactic Evaluation | SyntaxGym Overall | Accuracy | 82.5 | 14 |
| Dialogue | DailyDialog | R-1 | 14.99 | 10 |
| Syntactic Generalization | BLiMP (test) | BLiMP Accuracy | 0.773 | 8 |
| Parse Reranking | Penn Treebank (PTB) CoreNLP 3.3.0 converted (test) | UAS | 0.97 | 3 |