
ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning

About

Pre-trained Language Models (PLMs) have shown superior performance on various downstream Natural Language Processing (NLP) tasks. However, conventional pre-training objectives do not explicitly model relational facts in text, which are crucial for textual understanding. To address this issue, we propose ERICA, a novel contrastive learning framework that gives PLMs a deeper understanding of the entities and relations in text. Specifically, we define two novel pre-training tasks: (1) the entity discrimination task, which distinguishes which tail entity can be inferred from a given head entity and relation; and (2) the relation discrimination task, which distinguishes whether two relations are semantically close, and which involves complex relational reasoning. Experimental results demonstrate that ERICA improves typical PLMs (BERT and RoBERTa) on several language understanding tasks, including relation extraction, entity typing, and question answering, especially in low-resource settings.
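
For intuition, the relation discrimination task can be read as a supervised contrastive (InfoNCE-style) objective over entity-pair representations. Below is a minimal PyTorch sketch of such a loss; the function name, tensor shapes, temperature value, and label convention are illustrative assumptions, not ERICA's released implementation.

```python
import torch
import torch.nn.functional as F

def relation_discrimination_loss(pair_reprs, rel_labels, temperature=0.07):
    """Sketch of an InfoNCE-style loss over entity-pair representations.

    pair_reprs: (N, d) tensor; each row represents one (head, tail) entity
        pair, e.g. the concatenated head and tail entity encodings.
    rel_labels: (N,) tensor of relation ids; pairs that share a relation id
        are treated as positives for each other, all other pairs as negatives.
    """
    reprs = F.normalize(pair_reprs, dim=-1)        # compare in cosine space
    sim = reprs @ reprs.t() / temperature          # (N, N) similarity matrix
    eye = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(eye, -1e9)               # exclude self-similarity
    # Positives: pairs with the same relation label, excluding the anchor.
    pos = (rel_labels.unsqueeze(0) == rel_labels.unsqueeze(1)) & ~eye
    log_prob = sim - sim.logsumexp(dim=-1, keepdim=True)
    # Average log-probability over each anchor's positives; anchors with
    # no positive in the batch contribute zero.
    loss = -(log_prob * pos).sum(dim=-1) / pos.sum(dim=-1).clamp(min=1)
    return loss.mean()

# Toy usage: six entity pairs drawn from three relation types.
pair_reprs = torch.randn(6, 256)
rel_labels = torch.tensor([0, 1, 2, 0, 1, 2])
print(relation_discrimination_loss(pair_reprs, rel_labels))
```

The entity discrimination task can be sketched analogously, contrasting the correct tail entity against the other entities in the same document, given a head entity and relation.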

Yujia Qin, Yankai Lin, Ryuichi Takanobu, Zhiyuan Liu, Peng Li, Heng Ji, Minlie Huang, Maosong Sun, Jie Zhou • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Document-level Relation Extraction | DocRED (dev) | F1 Score: 58.8 | 231 |
| Relation Extraction | TACRED (test) | -- | 194 |
| Document-level Relation Extraction | DocRED (test) | F1 Score: 59 | 179 |
| Relation Extraction | DocRED (test) | -- | 121 |
| Relation Extraction | DocRED human-annotated (test) | Micro F1: 59.1 | 36 |
| Sentence-level Relation Extraction | SemEval (test) | F1 (micro): 0.884 | 24 |
| Relation Extraction | SemEval 2010 (test) | Precision: 89.6 | 10 |
| Relation Clustering | NYT24 | Accuracy (ACC): 71.1 | 8 |
| Joint Entity and Relation Extraction | WebNLG v1.0 (test) | Precision: 91.6 | 8 |
| Entity Clustering | BC5CDR (BioCreative V) | Accuracy: 92.3 | 7 |
