
Learning to Compute Word Embeddings On the Fly

About

Words in natural language follow a Zipfian distribution whereby some words are frequent but most are rare. Learning representations for words in the "long tail" of this distribution requires enormous amounts of data. Representations of rare words trained directly on end tasks are usually poor, requiring us to pre-train embeddings on external data, or to treat all rare words as out-of-vocabulary words with a unique representation. We provide a method for predicting embeddings of rare words on the fly from small amounts of auxiliary data with a network trained end-to-end for the downstream task. We show that this improves results over baselines where embeddings are trained on the end task, for reading comprehension, recognizing textual entailment, and language modeling.
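The core idea can be sketched in a few lines: when a rare word lacks a trained embedding, predict one from auxiliary data (e.g. the word's dictionary definition) by composing the embeddings of the definition's words. The sketch below uses a mean-pooling composer followed by a learned projection; the vocabulary, dimensions, and projection matrix here are illustrative placeholders, and in the actual method the composer is trained end-to-end with the downstream task network.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (placeholder)

# In-vocabulary words with trained embeddings (placeholder values).
vocab = {"a": 0, "small": 1, "dog": 2, "runs": 3, "fast": 4, "animal": 5}
E = rng.normal(size=(len(vocab), d))

# Learned projection applied to the pooled definition embedding
# (trained jointly with the end task in the paper's setup).
W = rng.normal(size=(d, d)) / np.sqrt(d)

def embed_on_the_fly(definition_words):
    """Predict an embedding for a rare word by mean-pooling the
    embeddings of the in-vocabulary words in its definition,
    then applying a learned projection."""
    ids = [vocab[w] for w in definition_words if w in vocab]
    pooled = E[ids].mean(axis=0)
    return pooled @ W

# A rare word like "chihuahua" gets an embedding from its definition.
e_rare = embed_on_the_fly(["a", "small", "dog"])
print(e_rare.shape)
```

Because the composer is differentiable, gradients from the downstream loss flow back through the predicted embedding into the composer's parameters, so rare-word representations improve without ever being stored in a lookup table.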

Dzmitry Bahdanau, Tom Bosc, Stanisław Jastrzębski, Edward Grefenstette, Pascal Vincent, Yoshua Bengio • 2017

Related benchmarks

Task                       | Dataset                                       | Result          | Rank
Natural Language Inference | SNLI (test)                                   | Accuracy 84.39  | 681
Question Answering         | SQuAD (test)                                  | --              | 111
Question Answering         | SQuAD (dev)                                   | --              | 74
Natural Language Inference | SNLI (dev)                                    | Accuracy 84.88  | 71
Natural Language Inference | MultiNLI matched (test)                       | Accuracy 71.45  | 65
Natural Language Inference | MultiNLI mismatched (test)                    | Accuracy 70.7   | 56
Natural Language Inference | MultiNLI matched (dev)                        | Accuracy 71.39  | 23
Language Modeling          | One Billion Word (OBW) 1% train set (test)    | PPL 66.23       | 11
Language Modeling          | One Billion Word (OBW) 100% train set (test)  | PPL 39.56       | 11
Natural Language Inference | MultiNLI mismatched (dev)                     | Accuracy 71.65  | 11
