Distributed Representations of Words and Phrases and their Compositionality

About

The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.
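The abstract names three concrete techniques: subsampling of frequent words, negative sampling (NEG), and a data-driven phrase detector. As a quick illustration, here is a minimal Python sketch of all three, following the formulas given in the paper: the subsampling keep-probability sqrt(t / f(w)), the NEG objective log sigma(v'_pos . v_in) + sum_k log sigma(-v'_neg_k . v_in), and the bigram phrase score (count(ab) - delta) / (count(a) * count(b)). The toy corpus, vector dimensions, and variable names are illustrative assumptions, not taken from the paper.

```python
import math
from collections import Counter

import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def subsample_keep_prob(word_count, total_count, t=1e-5):
    # The paper discards each occurrence of word w with probability
    # P(w) = 1 - sqrt(t / f(w)), where f(w) is w's relative frequency,
    # so the keep probability is sqrt(t / f(w)), clipped to 1 for rare words.
    freq = word_count / total_count
    return min(1.0, math.sqrt(t / freq))


def negative_sampling_loss(v_in, v_pos, v_negs):
    # NEG objective for one (input, output) word pair: maximize
    # log sigma(v'_pos . v_in) + sum_k log sigma(-v'_neg_k . v_in),
    # returned negated so it can be minimized by gradient descent.
    pos_term = np.log(sigmoid(v_pos @ v_in))
    neg_term = np.sum(np.log(sigmoid(-(v_negs @ v_in))))
    return -(pos_term + neg_term)


def phrase_score(bigram_count, count_a, count_b, delta=5.0):
    # score(a, b) = (count(ab) - delta) / (count(a) * count(b));
    # bigrams scoring above a chosen threshold are merged into a single
    # token such as "Air_Canada". delta discounts bigrams formed from
    # very infrequent words.
    return (bigram_count - delta) / (count_a * count_b)


# Toy usage (counts, vectors, and the threshold are illustrative).
tokens = "new york is in new york state".split()
counts, total = Counter(tokens), len(tokens)
print(subsample_keep_prob(counts["new"], total))            # frequent word: low keep prob
print(phrase_score(2, counts["new"], counts["york"], 0.5))  # "new york" phrase candidate

rng = np.random.default_rng(0)
v_in, v_pos = rng.normal(size=100), rng.normal(size=100)
v_negs = rng.normal(size=(5, 100))  # k = 5 negative samples
print(negative_sampling_loss(v_in, v_pos, v_negs))
```

In practice the phrase detector is run over the corpus several times with a decreasing threshold, so that longer phrases are built up from previously merged bigrams.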

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean • 2013

Related benchmarks

Task                         Dataset                         Metric              Result   Rank
Subjectivity Classification  Subj                            Accuracy            89.2     329
Question Classification     TREC                            Accuracy            82.2     259
Sentiment Classification    MR                              Accuracy            73.6     148
Sentiment Classification    IMDB (test)                     Error Rate          7.29     144
Sentiment Classification    CR                              Accuracy            77.3     142
Word Similarity             WordSim-353                     Spearman Rho        0.668    114
Text-to-SQL                 Spider (dev)                    --                  --       100
Named Entity Recognition    CoNLL Spanish NER 2002 (test)   F1 Score            72.05    98
Named Entity Recognition    CoNLL Dutch 2002 (test)         F1 Score            61.67    87
Zero-shot Learning          SUN (unseen)                    Top-1 Accuracy (%)  39.6     50

Showing 10 of 84 rows.
