Few-shot Slot Tagging with Collapsed Dependency Transfer and Label-enhanced Task-adaptive Projection Network
About
In this paper, we explore slot tagging with only a few labeled support sentences (a.k.a. few-shot). Few-shot slot tagging faces a unique challenge compared to other few-shot classification problems, as it requires modeling the dependencies between labels. However, it is hard to apply previously learned label dependencies to an unseen domain, due to the discrepancy between label sets. To tackle this, we introduce a collapsed dependency transfer mechanism into the conditional random field (CRF) to transfer abstract label dependency patterns as transition scores. In the few-shot setting, the emission score of the CRF can be calculated as a word's similarity to the representation of each label. To calculate such similarity, we propose a Label-enhanced Task-Adaptive Projection Network (L-TapNet) based on the state-of-the-art few-shot classification model -- TapNet -- by leveraging label name semantics in representing labels. Experimental results show that our model significantly outperforms the strongest few-shot learning baseline by 14.64 F1 points in the one-shot setting.
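The scoring scheme described above can be sketched as follows: emission scores come from a word's similarity to each label representation, and a shared transition matrix scores label-to-label moves before Viterbi decoding picks the best sequence. This is a minimal illustration, not the paper's implementation -- plain cosine similarity stands in for the L-TapNet projection, and the transition matrix is assumed to be given (in the paper it is built from collapsed dependency patterns).

```python
import numpy as np

def emission_scores(word_embs, label_protos):
    """Emission score of each word for each label, as cosine similarity.
    word_embs: (seq_len, dim) word embeddings.
    label_protos: (n_labels, dim) label representations.
    (The paper uses L-TapNet projections; cosine is a stand-in here.)"""
    w = word_embs / np.linalg.norm(word_embs, axis=1, keepdims=True)
    p = label_protos / np.linalg.norm(label_protos, axis=1, keepdims=True)
    return w @ p.T  # (seq_len, n_labels)

def viterbi_decode(emissions, transitions):
    """Best label sequence under a linear-chain CRF.
    emissions: (seq_len, n_labels); transitions: (n_labels, n_labels),
    transitions[i, j] = score of moving from label i to label j."""
    seq_len, n_labels = emissions.shape
    score = emissions[0].copy()                      # best score ending in each label
    back = np.zeros((seq_len, n_labels), dtype=int)  # backpointers
    for t in range(1, seq_len):
        # total[i, j]: best path ending in i at t-1, then j at t
        total = score[:, None] + transitions + emissions[t][None, :]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    best = [int(score.argmax())]
    for t in range(seq_len - 1, 0, -1):
        best.append(int(back[t][best[-1]]))
    return best[::-1]
```

In a few-shot episode, `label_protos` would be computed from the handful of support sentences, while `transitions` is transferred from source domains, which is what lets the CRF structure carry over to unseen label sets.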
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Named Entity Recognition | CoNLL 03 | -- | -- | 102 |
| Named Entity Recognition | CoNLL 2003 | F1 | 83.97 | 86 |
| Named Entity Recognition | OntoNotes 5.0 | F1 | 52.35 | 79 |
| Named Entity Recognition | WNUT 2017 | -- | -- | 79 |
| Few-shot Named Entity Recognition | FewNERD Intra 1.0 | F1 | 41.93 | 44 |
| Named Entity Recognition | GUM | Micro F1 | 27.5 | 36 |
| Named Entity Recognition | i2b2 2014 | Micro F1 | 0.4789 | 26 |
| Named Entity Recognition | OntoNotes Onto-C 5.0 | Micro F1 | 45.24 | 26 |
| Named Entity Recognition | OntoNotes Onto-B 5.0 | Micro F1 | 41.97 | 26 |
| Named Entity Recognition | OntoNotes Onto-A 5.0 | Micro F1 | 21.48 | 26 |