Efficient Estimation of Word Representations in Vector Space
About
We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities.
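The two architectures the abstract refers to are CBOW and skip-gram, and the "syntactic and semantic word similarities" test is the word-analogy task (e.g. vector("king") - vector("man") + vector("woman") should land near vector("queen")). A minimal sketch of that idea, using the gensim library rather than the authors' original C implementation; the toy corpus and hyperparameters below are illustrative only:

```python
# Sketch only: trains skip-gram vectors with gensim and probes the
# analogy property described in the paper. The corpus is a toy stand-in
# for the ~1.6B-word training set, so results will be noisy.
from gensim.models import Word2Vec

sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["a", "man", "walks"],
    ["a", "woman", "walks"],
]

model = Word2Vec(
    sentences,
    vector_size=100,  # dimensionality of the word vectors
    window=5,         # context window size
    min_count=1,      # keep all words in this toy corpus
    sg=1,             # 1 = skip-gram, 0 = CBOW
)

# vector("king") - vector("man") + vector("woman") ~ vector("queen")
print(model.wv.most_similar(positive=["king", "woman"],
                            negative=["man"], topn=3))
```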
Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean · 2013
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Sentiment Classification | IMDB (test) | Error Rate | 10.81 | 144 |
| Subjectivity Classification | Subj (test) | Accuracy | 91.3 | 125 |
| Sentiment Analysis | CR | Accuracy | 80.9 | 123 |
| Text Classification | 20News | Accuracy | 82.2 | 101 |
| Chunking | CoNLL 2000 (test) | F1 Score | 88.07 | 88 |
| Semantic Relatedness | SICK 2014 (test) | Pearson's r | 0.7577 | 56 |
| Named Entity Recognition | OntoNotes 4.0 (test) | F1 Score | 83.9 | 55 |
| Word Similarity | WS-353 | Spearman Correlation | 0.7141 | 54 |
| Text Classification | R8 | Accuracy | 96.3 | 54 |
| Part-of-Speech Tagging | WSJ (test) | Accuracy | 95.12 | 51 |
Showing 10 of 118 benchmark rows.
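For context on the Word Similarity row above: WS-353-style evaluations rank-correlate the model's cosine similarities with human similarity judgments and report Spearman's correlation. A hedged sketch of that protocol, where the vectors and human scores are made-up stand-ins for real embeddings and the actual WS-353 data:

```python
# Illustrative sketch of a WS-353-style evaluation: compare cosine
# similarity of learned vectors against human judgments via Spearman's rho.
import numpy as np
from scipy.stats import spearmanr

vectors = {  # hypothetical learned embeddings
    "tiger": np.array([0.9, 0.1, 0.0]),
    "cat":   np.array([0.8, 0.2, 0.1]),
    "stock": np.array([0.0, 0.9, 0.4]),
    "phone": np.array([0.1, 0.3, 0.9]),
}
pairs = [  # (word1, word2, illustrative human score on a 0-10 scale)
    ("tiger", "cat", 7.35),
    ("stock", "phone", 4.5),
    ("tiger", "phone", 1.0),
]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

model_scores = [cosine(vectors[a], vectors[b]) for a, b, _ in pairs]
human_scores = [h for _, _, h in pairs]
rho, _ = spearmanr(model_scores, human_scores)
print(f"Spearman correlation: {rho:.4f}")
```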