# Prompt-Based Metric Learning for Few-Shot NER
## About
Few-shot named entity recognition (NER) aims to generalize to unseen labels and/or domains from only a few labeled examples. Existing metric learning methods compute token-level similarities between the query and support sets, but they cannot fully incorporate label semantics into the model. To address this issue, we propose a simple method that substantially improves metric learning for NER: 1) multiple prompt schemas are designed to enhance label semantics; 2) a novel architecture effectively combines the resulting prompt-based representations. Empirically, our method achieves new state-of-the-art (SOTA) results in 16 of the 18 considered settings, outperforming the previous SOTA by an average of 8.84% and a maximum of 34.51% in relative micro F1 gains. Our code is available at https://github.com/AChen-qaq/ProML.
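The metric-learning setup described above can be sketched roughly as follows: support-token embeddings are pooled into label prototypes, query tokens are scored by similarity to each prototype, and scores from several prompt schemas are combined. This is a minimal, illustrative numpy sketch, not the ProML implementation; the function names (`prototypes`, `token_similarities`, `combine_prompt_views`) and the simple weighted-sum combination are assumptions for illustration, and the real method obtains its multiple representations from the designed prompt schemas.

```python
import numpy as np

def prototypes(support_emb, support_labels, num_labels):
    """Mean-pool support-token embeddings per label into class prototypes."""
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(num_labels)])

def token_similarities(query_emb, protos):
    """Cosine similarity of each query token to each label prototype."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    return q @ p.T  # shape: (num_query_tokens, num_labels)

def combine_prompt_views(sim_matrices, weights):
    """Weighted combination of similarity scores from several prompt views
    (a stand-in for the paper's learned combination architecture)."""
    return sum(w * s for w, s in zip(weights, sim_matrices))

# Toy usage: 4 support tokens (2 labels), 2 query tokens, 2-d embeddings.
support_emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
support_labels = np.array([0, 0, 1, 1])
protos = prototypes(support_emb, support_labels, num_labels=2)
sims = token_similarities(np.array([[1.0, 0.0], [0.0, 1.0]]), protos)
preds = combine_prompt_views([sims, sims], [0.5, 0.5]).argmax(axis=1)
```

Each query token is assigned the label of its most similar prototype; in the toy example above, the two query tokens land on labels 0 and 1 respectively.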
## Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Named Entity Recognition | CoNLL 03 | -- | 102 |
| Named Entity Recognition | WNUT 2017 | -- | 79 |
| Named Entity Recognition | Few-NERD (INTRA) | -- | 47 |
| Named Entity Recognition | GUM | Micro F1: 36.99 | 36 |
| Named Entity Recognition | CoNLL (test) | -- | 28 |
| Named Entity Recognition | i2b2 2014 | Micro F1: 0.5821 | 26 |
| Named Entity Recognition | OntoNotes 5.0 (Onto-A) | Micro F1: 52.46 | 26 |
| Named Entity Recognition | OntoNotes 5.0 (Onto-B) | Micro F1: 69.69 | 26 |
| Named Entity Recognition | OntoNotes 5.0 (Onto-C) | Micro F1: 67.58 | 26 |
| Named Entity Recognition | Few-NERD arXiv v6 (test) | 1-shot INTRA score: 56.49 | 12 |