
Coreferential Reasoning Learning for Language Representation

About

Language representation models such as BERT can effectively capture contextual semantic information from plain text and have been shown to achieve promising results on many downstream NLP tasks with appropriate fine-tuning. However, most existing language representation models cannot explicitly handle coreference, which is essential for a coherent understanding of the whole discourse. To address this issue, we present CorefBERT, a novel language representation model that can capture the coreferential relations in context. The experimental results show that, compared with existing baseline models, CorefBERT achieves consistent and significant improvements on various downstream NLP tasks that require coreferential reasoning, while maintaining performance comparable to previous models on other common NLP tasks. The source code and experiment details of this paper can be obtained from https://github.com/thunlp/CorefBERT.
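The intuition behind training a model to capture coreferential relations is that a repeated mention can often be recovered by copying another occurrence of the same mention from the surrounding context. A toy sketch of that mention-masking setup (plain Python for illustration only; this is not the paper's actual training code, and the function name and masking scheme are assumptions):

```python
def mask_repeated_mention(tokens, mention):
    """Mask every occurrence of `mention` except the first, mimicking a
    mention-masking scheme: a model trained on such data must copy the
    surviving occurrence from context to fill in the masked slots."""
    seen = False
    out = []
    for t in tokens:
        if t == mention and seen:
            out.append("[MASK]")
        else:
            out.append(t)
            if t == mention:
                seen = True
    return out

tokens = "Claire wrote a novel and the novel made Claire famous".split()
print(mask_repeated_mention(tokens, "Claire"))
# → ['Claire', 'wrote', 'a', 'novel', 'and', 'the', 'novel',
#    'made', '[MASK]', 'famous']
```

Recovering `[MASK]` here requires relating it back to the earlier mention "Claire" — exactly the kind of coreferential reasoning the abstract describes, which ordinary token-level masking does not explicitly target.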

Deming Ye, Yankai Lin, Jiaju Du, Zhenghao Liu, Peng Li, Maosong Sun, Zhiyuan Liu · 2020

Related benchmarks

Task | Dataset | Metric | Result | Rank
Natural Language Understanding | GLUE (test) | SST-2 Accuracy | 94.7 | 416
Document-level Relation Extraction | DocRED (dev) | F1 Score | 59.93 | 231
Document-level Relation Extraction | DocRED (test) | F1 Score | 59.91 | 179
Relation Extraction | DocRED (test) | F1 Score | 60.25 | 121
Relation Extraction | DocRED (dev) | F1 Score | 59.93 | 98
Relation Extraction | DocRED v1 (test) | F1 | 60.25 | 66
Relation Extraction | DocRED v1 (dev) | F1 Score | 59.43 | 65
Coreference Resolution | GAP (test) | Overall F1 | 77.8 | 53
Document-level Relation Extraction | DocRED 1.0 (test) | F1 | 59.91 | 51
Relation Extraction | DocRED official (test) | RE | 60.25 | 45

(Showing 10 of 32 rows)
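Most of the results above are F1 scores, the harmonic mean of precision and recall. A minimal computation from raw counts (the counts below are hypothetical and chosen for illustration, not taken from the paper):

```python
def f1_score(tp, fp, fn):
    """F1 from raw true-positive, false-positive, and false-negative counts:
    the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: precision = recall = 0.6, so F1 = 0.6
print(round(f1_score(tp=60, fp=40, fn=40) * 100, 2))
# → 60.0
```

Because F1 is a harmonic mean, it is pulled toward the lower of precision and recall, which is why it is the standard summary metric for extraction tasks such as DocRED.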

Other info

Code: https://github.com/thunlp/CorefBERT