
Matching the Blanks: Distributional Similarity for Relation Learning

About

General purpose relation extractors, which can model arbitrary relations, are a core aspiration in information extraction. Efforts have been made to build general purpose extractors that represent relations with their surface forms, or which jointly embed surface forms with relations from an existing knowledge graph. However, both of these approaches are limited in their ability to generalize. In this paper, we build on extensions of Harris' distributional hypothesis to relations, as well as recent advances in learning text representations (specifically, BERT), to build task agnostic relation representations solely from entity-linked text. We show that these representations significantly outperform previous work on exemplar based relation extraction (FewRel) even without using any of that task's training data. We also show that models initialized with our task agnostic representations, and then tuned on supervised relation extraction datasets, significantly outperform the previous methods on SemEval 2010 Task 8, KBP37, and TACRED.
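The core training idea in the paper is "matching the blanks": relation statements are sentences with two marked entity mentions, and each mention is independently replaced with a special [BLANK] symbol with probability α (the paper uses α = 0.7), so the encoder cannot rely on memorizing entity names. A minimal sketch of this input preparation step, using hypothetical [E1]/[E2] marker and span conventions (the exact tokenization details here are illustrative, not taken from the paper's code):

```python
import random

BLANK = "[BLANK]"

def mark_and_blank(tokens, e1_span, e2_span, alpha=0.7, rng=None):
    """Wrap the two entity mentions in marker tokens and, independently per
    mention with probability alpha, replace the mention with [BLANK].

    Spans are half-open [start, end) token indices; e1_span is assumed to
    precede e2_span without overlap (an illustrative simplification).
    """
    rng = rng or random.Random()
    s1, t1 = e1_span
    s2, t2 = e2_span
    mention1 = [BLANK] if rng.random() < alpha else tokens[s1:t1]
    mention2 = [BLANK] if rng.random() < alpha else tokens[s2:t2]
    return (
        tokens[:s1]
        + ["[E1]"] + mention1 + ["[/E1]"]
        + tokens[t1:s2]
        + ["[E2]"] + mention2 + ["[/E2]"]
        + tokens[t2:]
    )

# Example: alpha=0.0 keeps the surface forms, alpha=1.0 blanks both mentions.
tokens = ["Costello", "was", "born", "in", "London", "."]
print(mark_and_blank(tokens, (0, 1), (4, 5), alpha=0.0))
print(mark_and_blank(tokens, (0, 1), (4, 5), alpha=1.0))
```

During training, pairs of relation statements that share both linked entities are treated as expressing the same relation (positives), and the encoder is trained so their representations score higher under a dot-product similarity than those of non-matching pairs.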

Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, Tom Kwiatkowski • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Relation Extraction | TACRED (test) | F1 Score | 71.5 | 194 |
| Relation Classification | SemEval-2010 Task 8 (test) | F1 Score | 89.2 | 128 |
| Relation Extraction | TACRED | Micro F1 | 71.5 | 97 |
| Relation Extraction | SemEval | Micro F1 | 89.5 | 63 |
| Multi-modal Relation Extraction | MNRE (test) | F1 Score | 60.96 | 59 |
| Relation Extraction | SemEval (test) | Micro F1 | 89.5 | 55 |
| Relation Extraction | Wiki80 | Accuracy | 0.916 | 51 |
| Relation Extraction | ChemProt | Micro F1 | 79.8 | 40 |
| Relation Extraction | TACRED v1.0 (test) | F1 Score | 90.5 | 37 |
| Few-shot Relation Extraction | FewRel Biomedical domain 2.0 (test) | Accuracy | 87.9 | 36 |

Showing 10 of 34 rows.
