
An Empirical Analysis of Multiple-Turn Reasoning Strategies in Reading Comprehension Tasks

About

Reading comprehension (RC) is a challenging task that requires synthesis of information across sentences and multiple turns of reasoning. Using a state-of-the-art RC model, we empirically investigate the performance of single-turn and multiple-turn reasoning on the SQuAD and MS MARCO datasets. The RC model is an end-to-end neural network with iterative attention that uses reinforcement learning to dynamically control the number of turns. We find that multiple-turn reasoning outperforms single-turn reasoning for all question and answer types; further, we observe that enabling a flexible number of turns generally improves upon a fixed multiple-turn strategy. We achieve results competitive with the state-of-the-art on these two datasets.
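The core idea of the abstract, iterative attention over an encoded passage with a learned stopping decision, can be sketched as a simple loop. The code below is a minimal illustration, not the paper's actual model: the weight matrices (`W_att`, `W_gru`, `W_term`) are random placeholders, the state update is a plain tanh layer rather than a GRU, and the stop decision uses a fixed probability threshold instead of the reinforcement-learned policy described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_turn_read(memory, query, W_att, W_gru, W_term,
                    max_turns=5, threshold=0.5):
    """Iterative-attention reading loop (illustrative weights only).

    memory: (n_tokens, d) encoded passage; query: (d,) initial state.
    Each turn attends over the memory, folds the attended context into
    the state, and computes a termination probability; the loop stops
    early once that probability exceeds `threshold` (in the paper this
    stop decision is learned with reinforcement learning).
    """
    state = query
    for turn in range(1, max_turns + 1):
        scores = memory @ (W_att @ state)        # attention logits over tokens
        context = softmax(scores) @ memory       # attended passage summary
        state = np.tanh(W_gru @ np.concatenate([state, context]))
        p_stop = 1.0 / (1.0 + np.exp(-(W_term @ state)))
        if p_stop > threshold:                   # dynamic number of turns
            break
    return state, turn

# Toy usage with random passage encodings and weights.
d, n = 8, 12
memory = rng.normal(size=(n, d))
query = rng.normal(size=d)
W_att = 0.1 * rng.normal(size=(d, d))
W_gru = 0.1 * rng.normal(size=(d, 2 * d))
W_term = 0.1 * rng.normal(size=d)
state, turns = multi_turn_read(memory, query, W_att, W_gru, W_term)
```

A fixed multiple-turn strategy corresponds to dropping the early `break`; the flexible strategy the paper favors lets `turns` vary per question.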

Yelong Shen, Xiaodong Liu, Kevin Duh, Jianfeng Gao • 2017

Related benchmarks

Task                           | Dataset                                   | Metric   | Result | Rank
Question Answering             | SQuAD v1.1 (dev)                          | F1 Score | 82.9   | 375
Question Answering             | SQuAD v1.1 (test)                         | F1 Score | 82.6   | 260
Reading Comprehension          | Adversarial SQuAD AddOneSent v1.1 (test)  | F1 Score | 50.3   | 10
Reading Comprehension          | Adversarial SQuAD AddSent v1.1 (test)     | F1 Score | 39.4   | 10
Machine Reading Comprehension  | MS MARCO (dev)                            | ROUGE    | 38.01  | 4
