Reasoning with Latent Structure Refinement for Document-Level Relation Extraction
About
Document-level relation extraction requires integrating information within and across multiple sentences of a document and capturing complex interactions between inter-sentence entities. However, effective aggregation of relevant information in the document remains a challenging research question. Existing approaches construct static document-level graphs based on syntactic trees, coreferences, or heuristics from the unstructured text to model the dependencies. Unlike previous methods, which may fail to capture rich non-local interactions for inference, we propose a novel model that empowers relational reasoning across sentences by automatically inducing a latent document-level graph. We further develop a refinement strategy that enables the model to incrementally aggregate relevant information for multi-hop reasoning. Our model achieves an F1 score of 59.05 on a large-scale document-level dataset (DocRED), significantly improving over previous results, and also yields new state-of-the-art results on the CDR and GDA datasets. Furthermore, extensive analyses show that the model discovers more accurate inter-sentence relations.
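The induce-then-refine idea described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration only, not the authors' implementation: mention representations induce a soft adjacency matrix via scaled dot-product scores, a GCN-style propagation step updates the node states along the induced edges, and the two steps are repeated so that information can flow over multiple hops. All function names and dimensions here are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def induce_graph(nodes):
    # Induce a soft (latent) adjacency matrix from pairwise
    # scaled dot-product scores between node representations.
    d = nodes.shape[-1]
    scores = nodes @ nodes.T / np.sqrt(d)
    return softmax(scores, axis=-1)  # each row sums to 1

def refine(nodes, n_iters=2):
    # Refinement loop: each iteration re-induces the latent graph
    # from the current node states and propagates information one
    # hop along its edges (residual update keeps the original signal).
    for _ in range(n_iters):
        adj = induce_graph(nodes)
        nodes = np.tanh(adj @ nodes + nodes)
    return nodes

rng = np.random.default_rng(0)
mentions = rng.standard_normal((5, 16))  # 5 mention nodes, hidden size 16
refined = refine(mentions, n_iters=2)
```

Because the graph is re-induced at every iteration rather than fixed up front, later iterations can connect node pairs that had no strong edge in the first induced graph, which is the intuition behind multi-hop refinement.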
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Document-level Relation Extraction | DocRED (dev) | F1 | 59 | 231 |
| Document-level Relation Extraction | DocRED (test) | F1 | 59.05 | 179 |
| Relation Extraction | DocRED (test) | F1 | 59.05 | 121 |
| Relation Extraction | DocRED (dev) | F1 | 59 | 98 |
| Relation Extraction | CDR (test) | F1 | 64.8 | 92 |
| Dialogue Relation Extraction | DialogRE (test) | F1 | 44.4 | 69 |
| Relation Extraction | DocRED v1 (test) | F1 | 59.05 | 66 |
| Relation Extraction | DocRED v1 (dev) | F1 | 59 | 65 |
| Relation Extraction | GDA (test) | F1 | 82.2 | 65 |
| Document-level Relation Extraction | DocRED 1.0 (test) | F1 | 59.05 | 51 |