
Reinforced Mnemonic Reader for Machine Reading Comprehension

About

In this paper, we introduce the Reinforced Mnemonic Reader for machine reading comprehension tasks, which enhances previous attentive readers in two aspects. First, a reattention mechanism is proposed to refine current attentions by directly accessing past attentions that are temporally memorized in a multi-round alignment architecture, avoiding the problems of attention redundancy and attention deficiency. Second, a new optimization approach, called dynamic-critical reinforcement learning, is introduced to extend the standard supervised method. It always encourages the model to predict a more acceptable answer, addressing the convergence suppression problem that occurs in traditional reinforcement learning algorithms. Extensive experiments on the Stanford Question Answering Dataset (SQuAD) show that our model achieves state-of-the-art results. Meanwhile, our model outperforms previous systems by over 6% in terms of both Exact Match and F1 metrics on two adversarial SQuAD datasets.
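As a rough illustration of the reattention idea (not the paper's exact formulation), one alignment round can bias its similarity scores with the attention distribution memorized from an earlier round before normalizing; the shapes, values, and the fixed `gamma` below are all illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def reattention(sim, past_att, gamma=0.5):
    """Refine the current alignment using a temporally memorized attention.

    sim:      [m, n] raw similarity between m query words and n context words
              in the current alignment round (hypothetical values)
    past_att: [m, n] attention distribution produced by a previous round
    gamma:    merging weight (a trainable scalar in the paper; fixed here)
    """
    # Biasing the current scores with past attention lets rounds share
    # evidence: positions already attended are reinforced coherently
    # (countering attention redundancy), while the fresh similarity term
    # can still surface missed positions (countering attention deficiency).
    return softmax(sim + gamma * past_att, axis=-1)
```

Each refined row remains a proper probability distribution over context words, so the mechanism composes across any number of alignment rounds.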

Minghao Hu, Yuxing Peng, Zhen Huang, Xipeng Qiu, Furu Wei, Ming Zhou · 2017
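The dynamic-critical training signal summarized in the abstract can be sketched as follows; the function signature and the scalar rewards (e.g., span F1 against the gold answer) are assumptions for illustration, not the paper's exact objective:

```python
import math

def dcrl_loss(log_p_sampled, log_p_greedy, r_sampled, r_greedy):
    """Sketch of a dynamic-critical RL loss (assumed form).

    Standard self-critical training always uses the greedily decoded answer
    as the baseline; when the sampled answer is worse, the gradient only
    suppresses it and convergence can stall. Here the answer with the
    higher reward dynamically plays the "ground truth" while the other
    serves as the critic, so the model is always encouraged to predict
    the more acceptable answer.
    """
    advantage = abs(r_sampled - r_greedy)
    if r_sampled >= r_greedy:
        # reinforce the sampled answer, with the greedy one as baseline
        return -advantage * log_p_sampled
    # otherwise reinforce the greedy answer, with the sampled one as baseline
    return -advantage * log_p_greedy
```

Because one of the two candidates is reinforced in every step, the loss never reduces to pure suppression of the sampled answer.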

Related benchmarks

Task | Dataset | Result | Rank
Question Answering | SQuAD v1.1 (dev) | F1 Score 87.9 | 375
Question Answering | SQuAD v1.1 (test) | F1 Score 88.5 | 260
Machine Reading Comprehension | SQuAD 1.1 (dev) | EM 81.2 | 48
Machine Reading Comprehension | SQuAD 1.1 (test) | EM 82.3 | 46
Question Answering | TriviaQA Wiki domain, Verified (dev) | EM 54.5 | 21
Question Answering | TriviaQA Wikipedia (dev-full) | F1 52.9 | 19
Question Answering | SQuAD hidden 1.1 (test) | EM 77.7 | 18
Question Answering | AddOneSent (test) | EM 48.7 | 15
Question Answering | adversarial SQuAD (test) | Add Sent Score 46.6 | 12
Reading Comprehension | Adversarial SQuAD AddSent v1.1 (test) | F1 46.6 | 10
Showing 10 of 23 rows
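The EM and F1 columns above follow SQuAD-style answer scoring, which compares normalized answer strings rather than raw text; a minimal sketch of how these two metrics are typically computed (lowercasing, stripping punctuation and articles, then comparing token bags):

```python
import re
import string
from collections import Counter

def normalize(text):
    """SQuAD-style normalization: lowercase, drop punctuation and
    articles (a/an/the), and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred, gold):
    """EM: 1.0 iff the normalized strings are identical."""
    return float(normalize(pred) == normalize(gold))

def f1_score(pred, gold):
    """Token-level F1 over the bag-of-words overlap of the two answers."""
    pred_tokens = normalize(pred).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

The official evaluation additionally takes the maximum score over multiple gold answers per question; that loop is omitted here for brevity.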
