
Efficient and Robust Question Answering from Minimal Context over Documents

About

Neural models for question answering (QA) over documents have achieved significant performance improvements. Although effective, these models do not scale to large corpora due to their complex modeling of interactions between the document and the question. Moreover, recent work has shown that such models are sensitive to adversarial inputs. In this paper, we study the minimal context required to answer the question, and find that most questions in existing datasets can be answered with a small set of sentences. Inspired by this observation, we propose a simple sentence selector to select the minimal set of sentences to feed into the QA model. Our overall system achieves significant reductions in training (up to 15 times) and inference times (up to 13 times), with accuracy comparable to or better than the state-of-the-art on SQuAD, NewsQA, TriviaQA and SQuAD-Open. Furthermore, our experimental results and analyses show that our approach is more robust to adversarial inputs.
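The paper's selector is a learned neural model; as a rough illustration of the underlying idea only, the sketch below ranks sentences by simple word overlap with the question and keeps the top-k. All function names and the scoring heuristic here are hypothetical stand-ins, not the authors' method.

```python
# Minimal sketch of the sentence-selection idea, NOT the paper's model:
# the paper trains a neural scorer, while this toy version just counts
# shared words between the question and each candidate sentence.

def overlap_score(question, sentence):
    """Hypothetical score: number of distinct question words in the sentence."""
    q_words = set(question.lower().split())
    s_words = set(sentence.lower().split())
    return len(q_words & s_words)

def select_sentences(question, sentences, k=2):
    """Keep the k highest-scoring sentences, preserving document order."""
    ranked = sorted(range(len(sentences)),
                    key=lambda i: overlap_score(question, sentences[i]),
                    reverse=True)[:k]
    return [sentences[i] for i in sorted(ranked)]

doc = [
    "The Amazon rainforest covers much of the Amazon basin.",
    "It spans nine countries in South America.",
    "Brazil contains the largest share of the forest.",
]
question = "Which country contains the largest share of the Amazon forest?"
print(select_sentences(question, doc, k=1))
```

The selected subset, rather than the full document, is then what a downstream QA model would read, which is the source of the paper's reported training and inference speedups.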

Sewon Min, Victor Zhong, Richard Socher, Caiming Xiong • 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Question Answering | SQuAD (test) | - | - | 111 |
| Question Answering | NewsQA (dev) | F1 | 63.2 | 101 |
| Question Answering | SQuAD (dev) | F1 | 80.6 | 74 |
| Open-domain Question Answering | SQuAD Open (test) | EM | 34.7 | 39 |
| Open-domain Question Answering | SQuAD Open-domain 1.1 (test) | EM | 32.7 | 30 |
| Question Answering | SQuAD-Open | EM | 34.7 | 28 |
| Question Answering | TriviaQA Wiki domain, Verified (dev) | EM | 63.8 | 21 |
| Question Answering | SQuAD-Open (dev) | EM | 34.7 | 20 |
| Question Answering | TriviaQA Wikipedia (dev-full) | F1 | 61.3 | 19 |
| Open-domain Question Answering | SQuAD | EM | 34.7 | 16 |
Showing 10 of 15 benchmark results.
