
SQuAD: 100,000+ Questions for Machine Comprehension of Text

About

We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at https://stanford-qa.com
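The F1 and human-performance figures above come from SQuAD's span-level evaluation: a predicted answer span is compared against the gold span after light text normalization, with exact match (EM) and token-overlap F1 as the metrics. A minimal sketch of these metrics in the style of the official SQuAD evaluation script (function names here are illustrative):

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace,
    as in SQuAD-style answer normalization."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    # 1.0 if the normalized strings are identical, else 0.0
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    # Token-level F1 over the bag of normalized tokens
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Denver Broncos", "Denver Broncos"))   # 1.0
print(f1_score("Denver Broncos won", "Denver Broncos"))      # 0.8
```

With multiple gold answers per question, the official script takes the maximum score over the gold set; the sketch above shows the single-reference case.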

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang • 2016

Related benchmarks

Task                            Dataset                        Metric   Result   Rank
Question Answering              SQuAD v1.1 (dev)               F1       91       375
Question Answering              SQuAD v1.1 (test)              F1       91.221   260
Question Answering              SQuAD (test)                   F1       91.2     111
Question Answering              SQuAD (dev)                    F1       91       74
Question Answering              SQuAD v1.1 (val)               F1       51       70
Machine Reading Comprehension   SQuAD 1.1 (dev)                EM       80.3     48
Machine Reading Comprehension   SQuAD 1.1 (test)               EM       82.3     46
Question Answering              SQuAD hidden 1.1 (test)        EM       82.3     18
Question Answering              SQuAD 2.0 Sep 9, 2018 (test)   EM       86.9     17
Question Answering              AddOneSent (test)              EM       22.3     15

Showing 10 of 18 rows
