One-shot and few-shot learning of word embeddings
About
Standard deep learning systems require thousands or millions of examples to learn a concept, and cannot integrate new concepts easily. By contrast, humans have a remarkable ability to do one-shot or few-shot learning. For instance, from just hearing a word used in a sentence, humans can infer a great deal about it, by leveraging what the syntax and semantics of the surrounding words tell us. Here, we draw inspiration from this to highlight a simple technique by which deep recurrent networks can similarly exploit their prior knowledge to learn a useful representation for a new word from little data. This could make natural language processing systems much more flexible, by allowing them to learn continually from the new words they encounter.
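The core idea can be sketched as: keep the pretrained network frozen and optimize only the new word's embedding vector on the few sentences in which it appears. The toy objective below (softmax over dot-product scores against frozen "context" embeddings) is a hedged illustrative stand-in, not the paper's exact model or training setup; all names and hyperparameters here are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, vocab = 8, 20
E = rng.normal(size=(vocab, dim))  # "pretrained" embedding matrix, kept frozen

def learn_new_word(context_ids, steps=200, lr=0.5):
    """Gradient ascent on the log-likelihood of the observed context words,
    updating only the new word's embedding vector `v` (everything else frozen)."""
    v = np.zeros(dim)  # start from the origin (a stand-in for the vocabulary centroid)
    for _ in range(steps):
        logits = E @ v                     # score every known word against v
        p = np.exp(logits - logits.max())  # stable softmax
        p /= p.sum()
        # gradient of sum_c [log p(c | v)] with respect to v:
        grad = sum(E[c] for c in context_ids) - len(context_ids) * (p @ E)
        v += lr * grad / len(context_ids)
    return v

contexts = [3, 7, 11]            # word ids observed around the new word (few-shot data)
v_new = learn_new_word(contexts)
scores = E @ v_new
# the learned vector should score its observed contexts above the vocabulary average
print(all(scores[c] > scores.mean() for c in contexts))
```

Because only one embedding row receives gradient updates, the rest of the network's knowledge cannot be disrupted, which is what makes this kind of continual, few-shot vocabulary growth cheap.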
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Biomedical domain adaptation | Open Medical-LLM leaderboard | Macro Average: 68.9 | 84 |
| Biomedical domain adaptation | Open Medical-LLM leaderboard macro-average | Macro Average Score: 68.9 | 75 |
| Definition Generation | Biomedical domain tokens | Similarity Score: 60.2 | 75 |
| Definition Generation | Multi-word tokens (famous people, places, entities, sayings and concepts) | Correctness: 71.6 | 66 |
| Multiple-choice tasks | FrenchBench | Accuracy: 81.2 | 61 |