
Neural Language Modeling by Jointly Learning Syntax and Lexicon

About

We propose a neural language model capable of unsupervised syntactic structure induction. The model leverages the induced structure to form better semantic representations and to improve language modeling. Standard recurrent neural networks are limited by their sequential structure and fail to efficiently use syntactic information. On the other hand, tree-structured recursive networks usually require additional structural supervision at the cost of human expert annotation. In this paper, we propose a novel neural language model, called the Parsing-Reading-Predict Network (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model. In our model, the gradient can be directly back-propagated from the language model loss into the neural parsing network. Experiments show that the proposed model can discover the underlying syntactic structure and achieve state-of-the-art performance on word- and character-level language modeling tasks.
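The key property the abstract highlights is that the parsing network is trained end-to-end: the gates that expose or hide past tokens are smooth functions of learned "syntactic distance" scores, so the language-model loss can back-propagate into the parser. The sketch below illustrates that general idea with a simple sigmoid gating over per-token distance scores; the function name and the exact gating formula are illustrative assumptions, not the paper's precise PRPN equations.

```python
import numpy as np

def soft_structure_gates(distances, tau=1.0):
    """Illustrative sketch (not the paper's exact formulation):
    turn per-token syntactic-distance scores into soft gates
    gates[t, j] in [0, 1] saying how visible past token j is when
    predicting token t. Because each gate is a smooth (sigmoid)
    function of the distances, gradients from a downstream LM loss
    flow back into whatever network produced the distances."""
    T = len(distances)
    gates = np.zeros((T, T))
    for t in range(1, T):
        for j in range(t):
            # Tokens between j and t with larger distance "block" j.
            blockers = distances[j + 1 : t + 1]
            # Gate near 1 if token j's distance dominates all blockers,
            # near 0 otherwise; tau controls the softness of the cutoff.
            gates[t, j] = 1.0 / (1.0 + np.exp(-(distances[j] - max(blockers)) / tau))
    return gates

# Example: a token with a large distance score (an implied constituent
# boundary) stays visible past its smaller-distance neighbors.
g = soft_structure_gates(np.array([2.0, 0.5, 1.5, 0.2]), tau=0.1)
```

In the actual model these gates modulate a recurrent reader's attention over its memory; here they are shown in isolation to make the differentiability argument concrete.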

Yikang Shen, Zhouhan Lin, Chin-Wei Huang, Aaron Courville • 2017

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | PTB (test) | Perplexity | 87.1 | 471 |
| Language Modeling | Penn Treebank (test) | Perplexity | 61.98 | 411 |
| Language Modeling | Penn Treebank (PTB) (test) | Perplexity | 62 | 120 |
| Character-level Language Modeling | Penn Treebank (test) | BPC | 1.202 | 113 |
| Unsupervised Parsing | PTB (test) | F1 Score | 47.9 | 75 |
| Language Modeling | Penn Treebank word-level (test) | Perplexity | 62 | 72 |
| Unconditional Text Generation | EMNLP 2017 WMT News | Perplexity | 41.48 | 64 |
| Unsupervised Constituency Parsing | Chinese Treebank (CTB) (test) | Unlabeled Sentence F1 (Mean) | 30.4 | 36 |
| Grammar Induction | PTB English (test) | F1 Score | 41.2 | 29 |
| Unsupervised Constituency Parsing | WSJ (test) | Max F1 | 47.9 | 29 |

Showing 10 of 26 rows
