
Recurrent Neural Network Grammars

About

We introduce recurrent neural network grammars, probabilistic models of sentences with explicit phrase structure. We explain efficient inference procedures that allow application to both parsing and language modeling. Experiments show that they provide better parsing in English than any single previously published supervised generative model and better language modeling than state-of-the-art sequential RNNs in English and Chinese.
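The phrase-structure modeling described above is driven by a transition system in which actions incrementally build a tree: opening a nonterminal, generating a word, or closing the most recent open constituent. As a rough illustration (not the paper's implementation), the sketch below executes such an action sequence to build a bracketed tree; the neural scoring of actions, which is the paper's actual contribution, is omitted, and all names here are illustrative.

```python
# Illustrative sketch of a generative RNNG-style transition system:
# NT(X) opens a nonterminal X, GEN(w) generates a terminal w, and
# REDUCE closes the most recently opened nonterminal. The stack-LSTM
# composition and probabilistic action scoring from the paper are
# intentionally omitted; this only executes a fixed action sequence.

def execute(actions):
    stack = []  # holds completed subtrees (strings) and open-NT markers
    for act in actions:
        if act.startswith("NT("):
            stack.append(("OPEN", act[3:-1]))   # open-nonterminal marker
        elif act.startswith("GEN("):
            stack.append(act[4:-1])             # generated terminal
        elif act == "REDUCE":
            children = []
            # pop completed subtrees until the open-NT marker is reached
            while not (isinstance(stack[-1], tuple) and stack[-1][0] == "OPEN"):
                children.append(stack.pop())
            _, label = stack.pop()
            stack.append("(%s %s)" % (label, " ".join(reversed(children))))
    assert len(stack) == 1, "actions must leave exactly one finished tree"
    return stack[0]

actions = ["NT(S)", "NT(NP)", "GEN(the)", "GEN(hungry)", "GEN(cat)",
           "REDUCE", "NT(VP)", "GEN(meows)", "REDUCE", "REDUCE"]
print(execute(actions))
# -> (S (NP the hungry cat) (VP meows))
```

Because every action both extends the tree and (in the full model) conditions on the entire derivation so far, the same machinery supports parsing (score trees for a fixed sentence) and language modeling (marginalize over trees to score sentences).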

Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, Noah A. Smith • 2016

Related benchmarks

Task                      Dataset                              Metric      Result  Rank
Language Modeling         PTB (test)                           Perplexity  80.6    471
Language Modeling         Penn Treebank (test)                 Perplexity  102.4   411
Constituent Parsing       PTB (test)                           F1          93.3    127
Language Modeling         Penn Treebank (PTB) (test)           Perplexity  88.7    120
Dependency Parsing        Chinese Treebank (CTB) (test)        UAS         85.5    99
Unsupervised Parsing      PTB (test)                           F1          71.9    75
Phrase-structure Parsing  PTB (§23)                            F1          93.3    56
Constituency Parsing      Penn Treebank WSJ (section 23 test)  F1          93.3    55
Constituent Parsing       CTB (test)                           F1          86.9    45
Grammar Induction         PTB English (test)                   F1          68.1    29

Showing 10 of 21 rows

Other info

Code
