Neural Models for Reasoning over Multiple Mentions using Coreference
About
Many problems in NLP require aggregating information from multiple mentions of the same entity, which may be far apart in the text. Existing Recurrent Neural Network (RNN) layers are biased towards short-term dependencies and hence ill-suited to such tasks. We present a recurrent layer which is instead biased towards coreferent dependencies. The layer uses coreference annotations extracted from an external system to connect entity mentions belonging to the same cluster. Incorporating this layer into a state-of-the-art reading comprehension model improves performance on three datasets (WikiHop, LAMBADA, and the bAbI AI tasks), with large gains when training data is scarce.
Bhuwan Dhingra, Qiao Jin, Zhilin Yang, William W. Cohen, Ruslan Salakhutdinov • 2018
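To make the idea concrete, below is a minimal sketch of a coreference-aware GRU layer in PyTorch. It is not the authors' released implementation: the paper's layer propagates states along a DAG defined by the coreference clusters, whereas this sketch uses a simpler learned scalar gate to mix the sequential hidden state with the hidden state at the token's most recent coreferent mention. The names `CorefGRU` and `coref_prev`, and the gating scheme, are illustrative assumptions; `coref_prev` stands in for the output of the external coreference system mentioned in the abstract.

```python
import torch
import torch.nn as nn


class CorefGRU(nn.Module):
    """GRU layer with a shortcut to the previous coreferent mention (sketch)."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        # Scalar gate: how much to trust the antecedent's state vs. the
        # sequential one when both are available.
        self.mix = nn.Linear(input_size + 2 * hidden_size, 1)
        self.hidden_size = hidden_size

    def forward(self, x, coref_prev):
        # x: (seq_len, batch, input_size)
        # coref_prev: (seq_len, batch) long tensor; coref_prev[t, b] is the
        # position (< t) of the previous mention in the same coreference
        # cluster, or -1 if token t is not a mention / starts a chain.
        seq_len, batch, _ = x.shape
        h = x.new_zeros(batch, self.hidden_size)
        states = []
        for t in range(seq_len):
            h_seq = h
            h_ante = h_seq
            if states:
                idx = coref_prev[t]                    # (batch,)
                has_ante = idx >= 0
                if has_ante.any():
                    past = torch.stack(states)         # (t, batch, hidden)
                    h_ante = h_seq.clone()
                    for b in torch.nonzero(has_ante).flatten():
                        h_ante[b] = past[idx[b], b]
            # Gate between the sequential state and the antecedent state,
            # then take one ordinary GRU step from the mixed state.
            g = torch.sigmoid(self.mix(torch.cat([x[t], h_seq, h_ante], -1)))
            h = self.cell(x[t], g * h_ante + (1 - g) * h_seq)
            states.append(h)
        return torch.stack(states)                     # (seq_len, batch, hidden)


# Toy usage: token 4 of sequence 0 corefers with token 1, so its update
# also sees the hidden state computed at token 1.
layer = CorefGRU(input_size=8, hidden_size=16)
x = torch.randn(5, 2, 8)
coref = torch.full((5, 2), -1, dtype=torch.long)
coref[4, 0] = 1
out = layer(x, coref)  # (5, 2, 16)
```

The shortcut edge is what lets information about an entity skip directly between distant mentions instead of decaying through every intermediate recurrence step, which is the bias toward coreferent dependencies the abstract describes.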
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Question Answering | WikiHop (test) | Accuracy | 59.3 | 32 |
| Reading Comprehension | LAMBADA (test) | Accuracy | 55.69 | 13 |
| Multi-hop Reading Comprehension | WikiHop unmasked (dev) | Accuracy | 61.4 | 11 |
| Multi-hop Reading Comprehension | WikiHop unmasked (test) | Accuracy | 59.3 | 9 |
| Question Answering | WikiHop (dev) | Accuracy | 56 | 8 |
| Reading Comprehension | bAbI 1K (test) | Maximum Accuracy | 88.6 | 7 |
| Reading Comprehension | WikiHop (dev) | Accuracy | 61.4 | 6 |
| Reading Comprehension | LAMBADA context (test) | Accuracy | 68.88 | 3 |
| Reading Comprehension | WikiHop (test) | Overall Score | 59.3 | 2 |