Semi-supervised Word Sense Disambiguation with Neural Models
About
Determining the intended sense of words in text - word sense disambiguation (WSD) - is a long-standing problem in natural language processing. Recently, researchers have shown promising results using word vectors extracted from a neural network language model as features in WSD algorithms. However, a simple average or concatenation of the word vectors for each word in a text loses the sequential and syntactic information of the text. In this paper, we study WSD with a sequence-learning neural network, an LSTM, to better capture the sequential and syntactic patterns of the text. To alleviate the lack of training data in all-words WSD, we employ the same LSTM in a semi-supervised label-propagation classifier. We demonstrate state-of-the-art results, especially on verbs.
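The semi-supervised step described above spreads sense labels from a few labeled examples to unlabeled ones through a similarity graph over context vectors. The sketch below is a minimal, hedged illustration of generic iterative label propagation in NumPy; the function name `propagate_labels`, the RBF similarity graph, and all parameters are illustrative assumptions, not the paper's exact algorithm or implementation (which builds the graph from LSTM context embeddings).

```python
import numpy as np

def propagate_labels(X, y, n_classes, n_iters=50, sigma=1.0):
    """Illustrative iterative label propagation over a similarity graph.

    X : (n, d) context vectors (e.g. produced by an LSTM encoder)
    y : (n,) int labels; -1 marks unlabeled examples
    """
    # Pairwise RBF similarities between context vectors (assumed graph choice).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    T = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

    labeled = y >= 0
    F = np.zeros((len(y), n_classes))
    F[labeled, y[labeled]] = 1.0           # one-hot seed for labeled points

    for _ in range(n_iters):
        F = T @ F                          # diffuse label mass along the graph
        F[labeled] = 0.0                   # clamp labeled examples each step
        F[labeled, y[labeled]] = 1.0

    return F.argmax(axis=1)
```

With well-separated clusters and one labeled point per sense, the unlabeled points inherit the label of their cluster's seed, which is the intuition behind using label propagation to stretch scarce sense-annotated data.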
Related benchmarks
| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Word Sense Disambiguation | SemEval-2007 Task 17 (test) | F1 Score | 63.7 | 36 |
| Word Sense Disambiguation | SemEval Task 7 (S7-T7) 2007 (test) | F1 Score | 84.3 | 29 |
| Word Sense Disambiguation | Senseval-3 English all-words | F1 Score | 71.8 | 14 |
| Word Sense Disambiguation | SemEval English 2015 (test) | F1 (All) | 72.6 | 13 |
| Word Sense Disambiguation | SemEval Task 17 all-words 2007 | F1 (Nouns) | 72.3 | 9 |
| Word Sense Disambiguation | Senseval-2 all-words | F1 (All Words) | 74.4 | 8 |
| Word Sense Disambiguation | SemEval Coarse-grained 2007 (all-words) | F1 (Nouns) | 0.834 | 8 |
| Word Sense Disambiguation | SemEval-2013 Task 12 (nouns) | F1 (Nouns) | 69.5 | 7 |