Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling

About

Recurrent neural networks have been very successful at predicting sequences of words in tasks such as language modeling. However, all such models are based on the conventional classification framework, where the model is trained against one-hot targets, and each word is represented both as an input and as an output in isolation. This causes inefficiencies in learning, both in terms of utilizing all of the available information and in terms of the number of parameters needed to train. We introduce a novel theoretical framework that facilitates better learning in language modeling, and show that our framework leads to tying together the input embedding and the output projection matrices, greatly reducing the number of trainable variables. Our framework leads to state-of-the-art performance on the Penn Treebank with a variety of network models.

Hakan Inan, Khashayar Khosravi, Richard Socher • 2016
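
The core idea in the abstract, reusing the input embedding matrix as the output projection, is simple to express in code. Below is a minimal sketch in PyTorch (our choice of framework, not necessarily the authors' implementation); the class name `TiedLSTMLanguageModel` and all layer sizes are illustrative assumptions, not the paper's exact configuration.

```python
import torch.nn as nn

class TiedLSTMLanguageModel(nn.Module):
    """Minimal weight-tying sketch; hyperparameters are illustrative."""

    def __init__(self, vocab_size: int, emb_size: int = 400, num_layers: int = 2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_size)
        # Tying requires the LSTM hidden size to match the embedding size
        # (otherwise an extra projection would be needed in between).
        self.lstm = nn.LSTM(emb_size, emb_size, num_layers, batch_first=True)
        self.decoder = nn.Linear(emb_size, vocab_size)
        # Weight tying: the output projection shares the input embedding
        # matrix, removing vocab_size x emb_size trainable parameters.
        self.decoder.weight = self.embedding.weight

    def forward(self, tokens):
        # tokens: (batch, seq_len) word ids -> (batch, seq_len, vocab_size) logits
        hidden, _ = self.lstm(self.embedding(tokens))
        return self.decoder(hidden)
```

On a Penn Treebank-sized vocabulary (10k words) with 400-dimensional embeddings, the shared matrix saves roughly 4 million trainable parameters relative to an untied model.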

Related benchmarks

Task              | Dataset                          | Metric           | Result | Rank
Language Modeling | WikiText-2 (test)                | PPL              | 82     | 1541
Language Modeling | WikiText-103 (test)              | Perplexity       | 48.7   | 524
Language Modeling | PTB (test)                       | Perplexity       | 48.7   | 471
Language Modeling | Penn Treebank (test)             | Perplexity       | 66     | 411
Language Modeling | WikiText2 v1 (test)              | Perplexity       | 87     | 341
Language Modeling | WikiText2 (val)                  | Perplexity (PPL) | 91.5   | 277
Language Modeling | Penn Treebank (val)              | Perplexity       | 68.1   | 178
Language Modeling | Penn Treebank (PTB) (test)       | Perplexity       | 68.5   | 120
Language Modeling | PTB (val)                        | Perplexity       | 75.7   | 83
Language Modeling | Penn Treebank word-level (test)  | Perplexity       | 68.5   | 72
Showing 10 of 15 rows
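
For context on the metric: perplexity is the exponentiated average per-word negative log-likelihood of the model on held-out text, so lower is better. A minimal sketch, assuming losses measured in nats as in most frameworks:

```python
import math

def perplexity(total_nll: float, num_words: int) -> float:
    """Perplexity = exp(mean negative log-likelihood per word), assuming
    total_nll is a summed natural-log loss over num_words tokens."""
    return math.exp(total_nll / num_words)

# An average loss of ~4.23 nats/word corresponds to PPL ~ 68.5,
# the Penn Treebank (PTB) test value reported above.
```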
