Neural Models for Reasoning over Multiple Mentions using Coreference

About

Many problems in NLP require aggregating information from multiple mentions of the same entity, which may be far apart in the text. Existing Recurrent Neural Network (RNN) layers are biased towards short-term dependencies and hence not suited to such tasks. We present a recurrent layer which is instead biased towards coreferent dependencies. The layer uses coreference annotations extracted from an external system to connect entity mentions belonging to the same cluster. Incorporating this layer into a state-of-the-art reading comprehension model improves performance on three datasets -- Wikihop, LAMBADA and the bAbi AI tasks -- with large gains when training data is scarce.
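To make the idea concrete, here is a minimal sketch (in PyTorch, not the authors' released code) of one way such a coreference-biased recurrent layer can be wired: at each token, a learned gate mixes the usual sequential hidden state with the hidden state recorded at the most recent mention of the same coreference cluster, so information can flow directly between coreferent mentions regardless of distance. The class name CorefBiasedGRU, the scalar gate, and the cluster-id input format are illustrative assumptions; the paper's actual layer combines the sequential and coreferent predecessors inside the recurrent update itself, whereas this sketch uses a simpler convex mix before a standard GRUCell.

```python
# A minimal sketch of a coreference-biased recurrent layer (illustrative,
# not the paper's exact Coref-GRU). Cluster ids come from an external
# coreference system; -1 marks tokens outside any coreference cluster.
import torch
import torch.nn as nn

class CorefBiasedGRU(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        # Scalar gate deciding how much weight the coreferent parent gets.
        self.gate = nn.Linear(input_size + 2 * hidden_size, 1)
        self.hidden_size = hidden_size

    def forward(self, x, cluster_ids):
        # x: (seq_len, input_size); cluster_ids: list of ints, -1 = no cluster
        h = x.new_zeros(self.hidden_size)
        last_mention = {}  # cluster id -> hidden state at its latest mention
        outputs = []
        for t in range(x.size(0)):
            c = cluster_ids[t]
            # Fall back to the sequential parent if no earlier mention exists.
            h_coref = last_mention.get(c, h)
            a = torch.sigmoid(self.gate(torch.cat([x[t], h, h_coref])))
            h_in = (1 - a) * h + a * h_coref  # mix the two parent states
            h = self.cell(x[t].unsqueeze(0), h_in.unsqueeze(0)).squeeze(0)
            if c >= 0:
                last_mention[c] = h  # record this cluster's newest mention
            outputs.append(h)
        return torch.stack(outputs)

# Toy usage: 6 tokens, where tokens 1 and 4 mention the same entity
# (cluster 0), so token 4 can read token 1's hidden state directly.
layer = CorefBiasedGRU(input_size=8, hidden_size=16)
tokens = torch.randn(6, 8)
out = layer(tokens, [-1, 0, -1, -1, 0, -1])
print(out.shape)  # torch.Size([6, 16])
```

The gate lets the model learn, per token, how much to trust the coreferent antecedent versus the immediately preceding word, which is the bias towards coreferent dependencies the abstract describes.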

Bhuwan Dhingra, Qiao Jin, Zhilin Yang, William W. Cohen, Ruslan Salakhutdinov • 2018

Related benchmarks

Task                              Dataset                  Metric            Result   Rank
Question Answering                Wikihop (test)           Accuracy          59.3     32
Reading Comprehension             LAMBADA (test)           Accuracy          55.69    13
Multi-hop Reading Comprehension   WikiHop unmasked (dev)   Accuracy          61.4     11
Multi-hop Reading Comprehension   WikiHop unmasked (test)  Accuracy          59.3     9
Question Answering                Wikihop (dev)            Accuracy          56       8
Reading Comprehension             bAbi 1K (test)           Maximum Accuracy  88.6     7
Reading Comprehension             Wikihop (dev)            Accuracy          61.4     6
Reading Comprehension             LAMBADA context (test)   Accuracy          68.88    3
Reading Comprehension             Wikihop (test)           Overall Score     59.3     2
