Zero-Shot Relation Extraction via Reading Comprehension
About
We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot. This reduction has several advantages: we can (1) learn relation-extraction models by extending recent neural reading-comprehension techniques, (2) build very large training sets for those models by combining relation-specific crowd-sourced questions with distant supervision, and even (3) do zero-shot learning by extracting new relation types that are only specified at test-time, for which we have no labeled training examples. Experiments on a Wikipedia slot-filling task demonstrate that the approach can generalize to new questions for known relation types with high accuracy, and that zero-shot generalization to unseen relation types is possible, at lower accuracy levels, setting the bar for future work on this task.
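The core reduction can be illustrated with a minimal sketch: each relation slot is associated with one or more natural-language question templates, a template is instantiated with the entity, and a reading-comprehension model answers (or abstains) over the passage. The template strings, function names, and the toy pattern-matching "QA model" below are all illustrative stand-ins, not the paper's actual neural reader or data.

```python
import re

# Hypothetical question templates per relation slot; the paper crowd-sources
# such "querified" questions for each relation.
RELATION_TEMPLATES = {
    "educated_at": ["Where did {e} study?", "Which university did {e} attend?"],
    "spouse": ["Who is {e}'s spouse?", "Who is {e} married to?"],
}

def extract_slot(relation, entity, passage, qa_model):
    """Ask each question for the relation; return the first answer found,
    or None if the model abstains on all of them (no-answer case)."""
    for template in RELATION_TEMPLATES[relation]:
        question = template.format(e=entity)
        answer = qa_model(question, passage)
        if answer is not None:
            return answer
    return None

def toy_qa_model(question, passage):
    """Illustrative stand-in for a neural reading-comprehension model:
    only handles "Where did X study?" via a crude surface pattern."""
    if "study" in question or "attend" in question:
        m = re.search(r"studied at ([A-Z][\w ]+?)(?:\.|,|$)", passage)
        return m.group(1) if m else None
    return None  # abstain on questions it cannot handle

passage = "Alan Turing studied at Cambridge."
print(extract_slot("educated_at", "Alan Turing", passage, toy_qa_model))
# → Cambridge
print(extract_slot("spouse", "Alan Turing", passage, toy_qa_model))
# → None (model abstains; passage does not answer the question)
```

Because the relation is specified only through its questions, an unseen relation type can be handled at test time by supplying new templates, which is what enables the zero-shot setting.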
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Machine Reading Comprehension | SQuAD 2.0 (dev) | EM | 59.8 | 57 |
| Machine Reading Comprehension | SQuAD 2.0 (test) | EM | 59.2 | 51 |
| Zero-shot Relation Extraction | Wiki-ZSL m=5 (test) | Precision (%) | 48.58 | 7 |
| Zero-shot Relation Extraction | Wiki-ZSL m=10 (test) | Precision (%) | 44.12 | 7 |
| Zero-shot Relation Extraction | Wiki-ZSL m=15 (test) | Precision (%) | 27.31 | 7 |
| Zero-shot Relation Extraction | FewRel m=5 (test) | Precision (%) | 56.27 | 7 |
| Zero-shot Relation Extraction | FewRel m=10 (test) | Precision (%) | 42.89 | 7 |
| Zero-shot Relation Extraction | FewRel m=15 (test) | Precision (%) | 29.15 | 7 |