An Empirical Analysis of Multiple-Turn Reasoning Strategies in Reading Comprehension Tasks
About
Reading comprehension (RC) is a challenging task that requires synthesizing information across sentences and over multiple turns of reasoning. Using a state-of-the-art RC model, we empirically investigate the performance of single-turn and multiple-turn reasoning on the SQuAD and MS MARCO datasets. The RC model is an end-to-end neural network with iterative attention, and uses reinforcement learning to dynamically control the number of turns. We find that multiple-turn reasoning outperforms single-turn reasoning for all question and answer types; further, we observe that enabling a flexible number of turns generally improves upon a fixed multiple-turn strategy. We achieve results competitive with the state of the art on these two datasets.
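The multi-turn mechanism described above can be sketched as an iterative attention loop with a termination gate. This is a minimal illustrative sketch, not the authors' implementation: all names (`multi_turn_read`, `W_att`, `W_upd`, `w_term`), the toy dimensions, and the random weights are hypothetical, and the stop decision, which the paper trains with reinforcement learning, is replaced here by a simple deterministic threshold on the gate's probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): a "memory" of 5 encoded passage tokens,
# hidden size 4, and randomly initialized (untrained) weights.
d = 4
memory = rng.standard_normal((5, d))   # encoded passage tokens
query = rng.standard_normal(d)         # initial question encoding

W_att = rng.standard_normal((d, d))    # attention bilinear weights
W_upd = rng.standard_normal((d, d))    # state-update weights
w_term = rng.standard_normal(d)        # termination-gate weights

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_turn_read(state, memory, max_turns=5, threshold=0.5):
    """Iterative attention with a termination gate.

    Each turn: attend over the passage memory, fold the attended context
    into the reasoning state, then query a termination gate. In the paper
    the stop decision is a stochastic policy trained with RL; here it is
    thresholded for determinism.
    """
    for turn in range(1, max_turns + 1):
        scores = memory @ (W_att @ state)            # attention logits
        attn = softmax(scores)                        # attention weights
        context = attn @ memory                       # attended summary
        state = np.tanh(W_upd @ (state + context))    # update reasoning state
        p_stop = 1.0 / (1.0 + np.exp(-w_term @ state))  # gate probability
        if p_stop > threshold:                        # flexible stopping
            break
    return state, turn

final_state, turns_used = multi_turn_read(query, memory)
print(turns_used)  # number of reasoning turns actually taken (at most 5)
```

A fixed multiple-turn strategy corresponds to always running `max_turns` iterations; the gate is what makes the number of turns flexible per question, which is the distinction the paper evaluates.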
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Question Answering | SQuAD v1.1 (dev) | F1 Score | 82.9 | 375 |
| Question Answering | SQuAD v1.1 (test) | F1 Score | 82.6 | 260 |
| Reading Comprehension | Adversarial SQuAD AddOneSent v1.1 (test) | F1 Score | 50.3 | 10 |
| Reading Comprehension | Adversarial SQuAD AddSent v1.1 (test) | F1 | 39.4 | 10 |
| Machine Reading Comprehension | MS MARCO (dev) | ROUGE | 38.01 | 4 |