
Enriching Word Vectors with Subword Information

About

Continuous word representations, trained on large unlabeled corpora, are useful for many natural language processing tasks. Popular models that learn such representations ignore the morphology of words by assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. In this paper, we propose a new approach based on the skipgram model, where each word is represented as a bag of character $n$-grams. A vector representation is associated with each character $n$-gram, and words are represented as the sum of these representations. Our method is fast, making it possible to train models on large corpora quickly and to compute word representations for words that did not appear in the training data. We evaluate our word representations on nine different languages, on both word similarity and analogy tasks. By comparing to recently proposed morphological word representations, we show that our vectors achieve state-of-the-art performance on these tasks.
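The core idea above can be sketched in a few lines: decompose a word into its character $n$-grams (with boundary markers, as in the paper), then represent the word as the sum of the n-gram vectors. This is a minimal illustration, not the paper's implementation; the `ngram_vecs` lookup table stands in for vectors that would actually be learned with the skipgram objective.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word, with '<' and '>' boundary symbols
    so that prefixes and suffixes differ from internal n-grams."""
    w = f"<{word}>"
    grams = set()
    for n in range(n_min, n_max + 1):
        for i in range(len(w) - n + 1):
            grams.add(w[i:i + n])
    grams.add(w)  # the full word itself is also kept as one unit
    return grams

def word_vector(word, ngram_vecs, dim=5):
    """Sum the vectors of all character n-grams of `word`.
    Works even for words unseen at training time, as long as some
    of their n-grams have vectors."""
    vec = np.zeros(dim)
    for g in char_ngrams(word):
        if g in ngram_vecs:
            vec += ngram_vecs[g]
    return vec
```

For example, `char_ngrams("where", 3, 3)` yields `<wh`, `whe`, `her`, `ere`, `re>` plus the whole-word unit `<where>`; note that the trigram `her` is distinct from the word *her*, whose unit would be `<her>`. Because out-of-vocabulary words still share n-grams with training words, `word_vector` returns a nonzero representation for them, which is what lets the method handle rare words.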

Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov · 2016

Related benchmarks

Task | Dataset | Metric | Result | Rank
Sentiment Analysis | CR | Accuracy | 81.9 | 123
Word Similarity | WS-353 | Spearman Correlation | 0.746 | 54
Word Similarity | RG-65 | Spearman Correlation | 0.808 | 35
Word Similarity | RG-65 (test) | Spearman Correlation | 0.7669 | 33
Word Similarity | WS-353 REL (test) | Spearman Correlation | 0.616 | 28
Word Similarity | SimLex-999 | Spearman Correlation | 38.2 | 23
Natural Language Inference | RONLI (test) | Micro F1 Score | 66 | 18
Tweet Classification | TweetEval 1.0 (test) | Emoji (M-F1) | 25.8 | 18
Word Similarity | WS-353 (test) | Spearman Correlation | 0.596 | 18
Clinical Concept Extraction | SemEval 2015 Task 14 (test) | Exact F1 | 0.7785 | 14
(Showing 10 of 62 benchmark rows.)
