
Improving Question Answering with External Knowledge

About

We focus on multiple-choice question answering (QA) tasks in subject areas such as science, which require both broad background knowledge and facts from the given subject-area reference corpus. In this work, we explore simple yet effective methods for exploiting two sources of external knowledge for subject-area QA. The first method enriches the original subject-area reference corpus with relevant text snippets extracted from an open-domain resource (i.e., Wikipedia) that cover potentially ambiguous concepts in the question and answer options. As in other QA research, the second method simply increases the amount of training data by appending additional in-domain subject-area QA instances. Experiments on three challenging multiple-choice science QA tasks (i.e., ARC-Easy, ARC-Challenge, and OpenBookQA) demonstrate the effectiveness of our methods: compared to the previous state of the art, we obtain absolute gains in accuracy of up to 8.1%, 13.0%, and 12.8%, respectively. While we observe consistent gains when introducing knowledge from Wikipedia, we find that employing additional QA training instances is not uniformly helpful: performance degrades when the added instances exhibit a higher level of difficulty than the original training data. As one of the first studies on exploiting unstructured external knowledge for subject-area QA, we hope our methods, observations, and discussion of the exposed limitations shed light on further developments in the area.
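Neither the abstract nor this page gives implementation details, so the snippet below is only a minimal sketch of the first idea: retrieving open-domain (Wikipedia) snippets for the concepts mentioned in a question and its answer options, and appending them to the subject-area reference corpus. The TF-IDF retriever, the function name retrieve_wikipedia_snippets, and the toy data are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (not the authors' code): enrich a subject-area reference
# corpus with Wikipedia snippets relevant to a question and its answer
# options, using a plain TF-IDF retriever as a stand-in for whatever
# retrieval component the paper actually uses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_wikipedia_snippets(question, options, wiki_sentences, top_k=2):
    """Return the top_k Wikipedia sentences most similar to the question
    paired with each of its answer options."""
    queries = [f"{question} {option}" for option in options]
    vectorizer = TfidfVectorizer(stop_words="english")
    sentence_vectors = vectorizer.fit_transform(wiki_sentences)
    query_vectors = vectorizer.transform(queries)
    # Score every Wikipedia sentence against every (question, option) pair
    # and keep the best score each sentence achieves.
    scores = cosine_similarity(query_vectors, sentence_vectors).max(axis=0)
    top_indices = scores.argsort()[::-1][:top_k]
    return [wiki_sentences[i] for i in top_indices]

# Illustrative data: a toy subject-area corpus and a handful of Wikipedia
# sentences standing in for the open-domain resource.
reference_corpus = ["Vibrating matter can produce sound."]
wiki_sentences = [
    "Sound is a vibration that propagates as an acoustic wave.",
    "A rubber band is a loop of rubber used to hold objects together.",
    "Light is electromagnetic radiation perceived by the human eye.",
]
question = "Which form of energy is produced when a rubber band vibrates?"
options = ["heat", "sound", "light", "electricity"]

snippets = retrieve_wikipedia_snippets(question, options, wiki_sentences)
enriched_corpus = reference_corpus + snippets  # corpus fed to the QA reader
```

The second source of external knowledge described in the abstract, additional in-domain training instances, amounts to concatenating extra QA examples onto the training set, so it is not sketched here.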

Xiaoman Pan, Kai Sun, Dian Yu, Jianshu Chen, Heng Ji, Claire Cardie, Dong Yu • 2019

Related benchmarks

Task | Dataset | Metric | Result | Rank
Natural Language Understanding | GLUE (dev) | SST-2 (Acc) | 93.2 | 504
Sentiment Classification | IMDB (test) | Error Rate | 4.51 | 144
Machine Reading Comprehension | RACE (test) | RACE Accuracy (Medium) | 76.6 | 111
Machine Reading Comprehension | SQuAD 2.0 (dev) | EM | 78.98 | 57
Machine Reading Comprehension | SQuAD 2.0 (test) | EM | 80.005 | 51
Machine Reading Comprehension | SQuAD 1.1 (dev) | EM | 84.1 | 48
Machine Reading Comprehension | SQuAD 1.1 (test) | EM | 87.433 | 46
Text Classification | DBPedia (test) | Test Error Rate | 0.64 | 40
Text Classification | Yelp-2 (test) | Error Rate | 1.89 | 14
Text Classification | Amazon-2 (test) | Error Rate | 0.0263 | 6

(Showing 10 of 13 rows.)
