Zero-Shot Information Extraction as a Unified Text-to-Triple Translation
About
We cast a suite of information extraction tasks into a text-to-triple translation framework. Instead of solving each task with task-specific datasets and models, we formalize the task as a translation between task-specific input text and output triples. By taking the task-specific input, we enable a task-agnostic translation that leverages the latent knowledge a pre-trained language model has about the task. We further demonstrate that a simple pre-training task of predicting which relational information corresponds to which input text is an effective way to produce task-specific outputs. This enables zero-shot transfer of our framework to downstream tasks. We study the zero-shot performance of this framework on open information extraction (OIE2016, NYT, WEB, PENN), relation classification (FewRel and TACRED), and factual probing (Google-RE and T-REx). The model transfers non-trivially to most tasks and is often competitive with fully supervised methods without any task-specific training. For instance, we significantly outperform the F1 score of a supervised open information extraction system without using its training set.
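The unifying idea is that every task shares one output space: (subject, relation, object) triples. The sketch below (hypothetical example data, not the authors' implementation or the paper's actual pipeline, which ranks candidate triples with a pre-trained language model) illustrates how open IE, relation classification, and factual probing can all be read as text-to-triple translation:

```python
# Minimal sketch: one shared triple representation across IE tasks.
# All example sentences and triples are illustrative, not from the paper.
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def format_triple(t: Triple) -> str:
    """Render a triple in a uniform surface form shared by all tasks."""
    subj, rel, obj = t
    return f"({subj}; {rel}; {obj})"

# One hypothetical input/output pair per task family:
examples = {
    "open_ie": (
        "Born in Hawaii, Obama attended Columbia.",
        [("Obama", "born in", "Hawaii"), ("Obama", "attended", "Columbia")],
    ),
    "relation_classification": (
        "Steve Jobs founded Apple in 1976.",
        [("Steve Jobs", "founded", "Apple")],
    ),
    "factual_probe": (
        "Dante was born in Florence.",
        [("Dante", "place of birth", "Florence")],
    ),
}

for task, (text, triples) in examples.items():
    rendered = ", ".join(format_triple(t) for t in triples)
    print(f"{task}: {text} -> {rendered}")
```

Because the output format is identical across tasks, a single translation model can serve all of them; only the input text changes.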
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Relation Extraction | TACRED | Micro F1 | 76.4 | 97 |
| Open Information Extraction | OIE 2016 | F1 | 72.6 | 18 |
| Open Information Extraction | WEB | F1 | 91.2 | 18 |
| Open Information Extraction | NYT | F1 | 85.5 | 18 |
| Open Information Extraction | PENN | F1 | 88.5 | 18 |
| Text2KG | CE12k (test) | G-BLEU | 6.32 | 8 |
| Factual Probe | Google-RE | -- | -- | 7 |
| Relation Classification | FewRel 1.0 (dev) | F1 (5-way 1-shot) | 92.9 | 6 |
| Factual Probe | T-REx | P@1 (Total) | 66 | 6 |