
Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space

About

There is rising interest in vector-space word embeddings and their use in NLP, especially given recent methods for their fast estimation at very large scale. Nearly all this work, however, assumes a single vector per word type, ignoring polysemy and thus jeopardizing their usefulness for downstream tasks. We present an extension to the Skip-gram model that efficiently learns multiple embeddings per word type. It differs from recent related work by jointly performing word sense discrimination and embedding learning, by non-parametrically estimating the number of senses per word type, and by its efficiency and scalability. We present new state-of-the-art results on the word similarity in context task and demonstrate scalability by training on a corpus of nearly 1 billion tokens in less than 6 hours on a single machine.
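To make the non-parametric sense-assignment idea concrete, here is a minimal sketch: a word's context is summarized as a vector, compared against that word's existing sense-cluster centers, and a new sense is created when no center is similar enough. The function name, the cosine-similarity choice, and the threshold `lam` are illustrative assumptions for this sketch, not the authors' exact implementation.

```python
import numpy as np

def assign_sense(context_vec, cluster_centers, lam=0.5):
    """Assign a context to one of a word's senses.

    cluster_centers is a mutable list of per-sense context centers.
    If the best cosine similarity falls below the (hypothetical)
    threshold lam, a new sense is created -- the non-parametric step.
    Returns the index of the chosen sense.
    """
    if len(cluster_centers) == 0:
        # First observed context for this word: start sense 0.
        cluster_centers.append(context_vec.copy())
        return 0
    sims = [
        np.dot(context_vec, c)
        / (np.linalg.norm(context_vec) * np.linalg.norm(c) + 1e-9)
        for c in cluster_centers
    ]
    best = int(np.argmax(sims))
    if sims[best] < lam:
        # No existing sense matches well enough: allocate a new one.
        cluster_centers.append(context_vec.copy())
        return len(cluster_centers) - 1
    # Fold the new context into the winning cluster's center.
    cluster_centers[best] += context_vec
    return best
```

In the full model, the chosen sense's embedding (rather than a single vector per word) would then receive the Skip-gram gradient update for this occurrence, so sense discrimination and embedding learning happen jointly.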

Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, Andrew McCallum • 2015

Related benchmarks

Task                   Dataset                        Result               Rank
Word Sense Induction   SemEval WSI 2010               V-Measure (All) 4.6  9
Word Sense Induction   SemEval-2010 WSI 80-20 split   SR (All) 58.6        8
