Fine-tune BERT for DocRED with a Two-Step Process
About
Modelling relations between multiple entities has attracted increasing attention recently, and the DocRED dataset was collected to accelerate research on document-level relation extraction. Current baselines for this task use a BiLSTM to encode the whole document and are trained from scratch. We argue that such simple baselines are not strong enough to model the complex interactions between entities. In this paper, we apply a pre-trained language model (BERT) to provide a stronger baseline for the task. We also find that solving the task in two phases further improves performance: the first step predicts whether two entities have a relation at all, and the second step predicts the specific relation.
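A minimal sketch of this two-step design is given below, assuming a PyTorch model built on the HuggingFace `transformers` BERT encoder. The class name, the mean-pooling of entity mentions, and the toy token positions in the usage snippet are illustrative assumptions, not the repository's exact implementation.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast


class TwoStepRelationExtractor(nn.Module):
    """BERT encoder with two heads: one for relation existence (step 1)
    and one for the specific relation type (step 2)."""

    def __init__(self, num_relations: int, hidden_size: int = 768):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-cased")
        # Step 1: binary head -- does this entity pair express any relation?
        self.has_relation = nn.Linear(2 * hidden_size, 2)
        # Step 2: multi-class head -- which specific relation holds?
        self.relation_type = nn.Linear(2 * hidden_size, num_relations)

    @staticmethod
    def pool_entity(states, entity_mask):
        # Average the hidden states of the tokens marked as belonging to an
        # entity (mean pooling is an assumed choice, not from the paper).
        mask = entity_mask.unsqueeze(-1).float()
        return (states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)

    def forward(self, input_ids, attention_mask, head_entity_mask, tail_entity_mask):
        states = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state  # (batch, seq_len, hidden)
        pair = torch.cat(
            [
                self.pool_entity(states, head_entity_mask),
                self.pool_entity(states, tail_entity_mask),
            ],
            dim=-1,
        )  # (batch, 2 * hidden)
        return self.has_relation(pair), self.relation_type(pair)


# Two-step decoding: the relation type from step 2 is only kept for
# entity pairs that step 1 judges to be related.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = TwoStepRelationExtractor(num_relations=96)  # DocRED defines 96 relation types
enc = tokenizer("Alice works for Acme Corp in Berlin.", return_tensors="pt")

head_mask = torch.zeros_like(enc["input_ids"])  # marks head-entity tokens
tail_mask = torch.zeros_like(enc["input_ids"])  # marks tail-entity tokens
head_mask[0, 1] = 1    # "Alice" (token positions are illustrative)
tail_mask[0, 4:6] = 1  # "Acme Corp"

exist_logits, rel_logits = model(
    enc["input_ids"], enc["attention_mask"], head_mask, tail_mask
)
if exist_logits.argmax(-1).item() == 1:
    predicted_relation = rel_logits.argmax(-1).item()
```

Splitting the decision this way lets the step-2 classifier train on (and decode over) relation types only, while the far more numerous unrelated entity pairs are filtered out by the step-1 binary head.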
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Document-level Relation Extraction | DocRED (dev) | F1 | 58.83 | 231 |
| Document-level Relation Extraction | DocRED (test) | F1 | 58.69 | 179 |
| Relation Extraction | DocRED (test) | F1 | 56.5 | 121 |
| Relation Extraction | DocRED (dev) | F1 | 55.4 | 98 |
| Relation Extraction | DocRED v1 (test) | F1 | 58.69 | 66 |
| Relation Extraction | DocRED v1 (dev) | F1 | 58.83 | 65 |
| Document-level Relation Extraction | DocRED 1.0 (test) | F1 | 58.69 | 51 |
| Document-level Relation Extraction | DocRED 1.0 (dev) | F1 | 54.42 | 42 |
| Relation Extraction | NYT (test) | P@100 | 82 | 9 |