A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors
About
Motivations like domain adaptation, transfer learning, and feature learning have fueled interest in inducing embeddings for rare or unseen words, n-grams, synsets, and other textual features. This paper introduces a la carte embedding, a simple and general alternative to the usual word2vec-based approaches for building such representations, grounded in recent theoretical results for GloVe-like embeddings. The method relies mainly on a linear transformation that is efficiently learnable from pretrained word vectors via linear regression. Once learned, the transform can be applied on the fly whenever a new text feature or rare word is encountered, even if only a single usage example is available. The authors introduce a new dataset showing that the a la carte method requires fewer examples of words in context to learn high-quality embeddings, and they report state-of-the-art results on a nonce task and several unsupervised document classification tasks.
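The core recipe described above can be sketched in a few lines: learn a linear map A that sends the average of a word's context vectors to that word's pretrained vector, then reuse A to induce a vector for an unseen word from a single context. The sketch below uses synthetic stand-ins for the pretrained vectors and context averages (the names `V`, `U`, and the noise model are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50          # embedding dimension
vocab = 1000    # number of words with pretrained vectors

# Pretrained word vectors v_w (synthetic stand-ins; the paper uses GloVe-style vectors).
V = rng.standard_normal((vocab, d))

# u_w: average of pretrained vectors of words appearing in contexts of w.
# Synthesized here as a noisy linear image of V purely for illustration.
M = rng.standard_normal((d, d))
U = V @ M * 0.1 + rng.standard_normal((vocab, d)) * 0.01

# Learn the linear transform A minimizing sum_w ||u_w A - v_w||^2 (ordinary
# least squares, i.e. linear regression on the pretrained vectors).
A, *_ = np.linalg.lstsq(U, V, rcond=None)   # shape (d, d); v_w ~= u_w @ A

# Induce an embedding for an unseen word from a single usage example:
# average the pretrained vectors of its context words, then apply A.
context_vectors = V[:5]                      # hypothetical context words
u_new = context_vectors.mean(axis=0)
v_new = u_new @ A                            # the induced a la carte embedding
```

The key design point is that A is shared across the whole vocabulary, so it can be fit once on well-attested words and then applied to any rare word or n-gram whose context words have pretrained vectors.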
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Subjectivity Classification | Subj | Accuracy: 93.8 | 266 |
| Text Classification | TREC | Accuracy: 89 | 179 |
| Sentiment Classification | CR | Accuracy: 84.3 | 142 |
| Text Classification | IMDB | Accuracy: 90.9 | 107 |
| Text Classification | MR | Accuracy: 81.8 | 93 |
| Text Classification | SST binary | Accuracy: 86.7 | 29 |
| Text Classification | MPQA | Accuracy: 87.6 | 25 |
| Few-shot embedding induction | Chimera 1.0 (test) | Spearman Correlation: 0.3941 | 15 |
| Text Classification | SST fine-grained | Accuracy: 48.1 | 10 |
| Word Sense Disambiguation | SemEval-2013 Task 12 (nouns) | -- | 7 |