Recurrent Neural Network Grammars
About
We introduce recurrent neural network grammars, probabilistic models of sentences with explicit phrase structure. We explain efficient inference procedures that allow application to both parsing and language modeling. Experiments show that they provide better parsing in English than any single previously published supervised generative model and better language modeling than state-of-the-art sequential RNNs in English and Chinese.
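The generative RNNG builds a phrase-structure tree top-down with three action types: opening a nonterminal, generating a terminal word, and reducing (closing) the most recent open constituent. A minimal sketch of replaying such an action sequence into a bracketed tree, with illustrative action names (`NT`, `GEN`, `REDUCE`) and an example sentence; this omits the neural scoring of actions, which is the substance of the model:

```python
def apply_actions(actions):
    """Replay a sequence of RNNG-style actions and return the bracketed tree.

    Open nonterminals are kept on the stack as ("open", label) tuples;
    completed subtrees and words are plain strings.
    """
    stack = []
    for act in actions:
        if act.startswith("NT("):
            stack.append(("open", act[3:-1]))      # open a constituent
        elif act.startswith("GEN("):
            stack.append(act[4:-1])                # generate a terminal
        elif act == "REDUCE":
            children = []
            while not isinstance(stack[-1], tuple):
                children.append(stack.pop())
            _, label = stack.pop()
            stack.append("(" + label + " " + " ".join(reversed(children)) + ")")
        else:
            raise ValueError("unknown action: " + act)
    assert len(stack) == 1, "actions must yield a single complete tree"
    return stack[0]

actions = ["NT(S)", "NT(NP)", "GEN(The)", "GEN(hungry)", "GEN(cat)", "REDUCE",
           "NT(VP)", "GEN(meows)", "REDUCE", "REDUCE"]
print(apply_actions(actions))
# -> (S (NP The hungry cat) (VP meows))
```

In the full model, an RNN encodes the stack, the action history, and the generated words to define a probability over the next action, so the same action sequence yields a joint probability of the sentence and its tree.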
Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, Noah A. Smith • 2016
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | PTB (test) | Perplexity | 80.6 | 471 |
| Language Modeling | Penn Treebank (test) | Perplexity | 102.4 | 411 |
| Constituent Parsing | PTB (test) | F1 | 93.3 | 127 |
| Language Modeling | Penn Treebank (PTB) (test) | Perplexity | 88.7 | 120 |
| Dependency Parsing | Chinese Treebank (CTB) (test) | UAS | 85.5 | 99 |
| Unsupervised Parsing | PTB (test) | F1 Score | 71.9 | 75 |
| Phrase-structure parsing | PTB (§23) | F1 Score | 93.3 | 56 |
| Constituency Parsing | Penn Treebank WSJ (section 23 test) | F1 Score | 93.3 | 55 |
| Constituent Parsing | CTB (test) | F1 Score | 86.9 | 45 |
| Grammar Induction | PTB English (test) | F1 Score | 68.1 | 29 |
(10 of 21 benchmark rows shown)