
Interpretable Entity Representations through Large-Scale Typing

About

In standard methodology for natural language processing, entities in text are typically embedded in dense vector spaces with pre-trained models. The embeddings produced this way are effective when fed into downstream models, but they require end-task fine-tuning and are fundamentally difficult to interpret. In this paper, we present an approach to creating entity representations that are human readable and achieve high performance on entity-related tasks out of the box. Our representations are vectors whose values correspond to posterior probabilities over fine-grained entity types, indicating the confidence of a typing model's decision that the entity belongs to the corresponding type. We obtain these representations using a fine-grained entity typing model, trained either on supervised ultra-fine entity typing data (Choi et al. 2018) or distantly-supervised examples from Wikipedia. On entity probing tasks involving recognizing entity identity, our embeddings used in parameter-free downstream models achieve competitive performance with ELMo- and BERT-based embeddings in trained models. We also show that it is possible to reduce the size of our type set in a learning-based way for particular domains. Finally, we show that these embeddings can be post-hoc modified through a small number of rules to incorporate domain knowledge and improve performance.
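To make the idea concrete, here is a minimal sketch of such an interpretable embedding: a vector holding one posterior probability per entity type, compared in a parameter-free downstream model via cosine similarity. The type inventory and probability values below are toy illustrations, not the paper's actual type set (which contains thousands of ultra-fine types) or model outputs.

```python
import numpy as np

# Hypothetical miniature type inventory; the paper's inventory is far larger.
TYPES = ["person", "politician", "athlete", "organization", "team", "location"]

def type_embedding(posteriors):
    """Build an interpretable entity embedding: one entry per type,
    holding the typing model's posterior probability for that type."""
    v = np.array([posteriors.get(t, 0.0) for t in TYPES])
    assert ((v >= 0.0) & (v <= 1.0)).all(), "entries must be probabilities"
    return v

def cosine(a, b):
    """Parameter-free similarity between two entity embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy posteriors a typing model might assign to three entity mentions.
obama  = type_embedding({"person": 0.99, "politician": 0.95, "athlete": 0.05})
merkel = type_embedding({"person": 0.98, "politician": 0.93, "athlete": 0.02})
lakers = type_embedding({"organization": 0.97, "team": 0.94})

# Entities with similar type profiles score higher, with no task-specific
# training: each dimension is directly readable as "P(entity has this type)".
print(cosine(obama, merkel) > cosine(obama, lakers))  # True
```

Because every dimension is a named type probability, the post-hoc rules mentioned above amount to simply overwriting individual entries (e.g., zeroing a type known to be inapplicable in a target domain).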

Yasumasa Onoe, Greg Durrett • 2020

Related benchmarks

Task | Dataset | Metric | Result | Rank
Ultra-Fine Entity Typing | UFET (test) | Precision | 52.8 | 66
Fine-Grained Entity Typing | OntoNotes (test) | Macro F1 | 77.3 | 27
Fine-Grained Entity Typing | FIGER (test) | Macro F1 | 79.4 | 22
