
ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning

About

Recent powerful pre-trained language models have achieved remarkable performance on most popular reading comprehension datasets. It is time to introduce more challenging datasets to push the field towards more comprehensive reasoning over text. In this paper, we introduce a new Reading Comprehension dataset requiring logical reasoning (ReClor), extracted from standardized graduate admission examinations. As earlier studies suggest, human-annotated datasets usually contain biases, which models often exploit to achieve high accuracy without truly understanding the text. To comprehensively evaluate the logical reasoning ability of models on ReClor, we propose to identify biased data points and separate them into an EASY set, with the rest forming a HARD set. Empirical results show that state-of-the-art models are remarkably good at capturing the biases in the dataset, achieving high accuracy on the EASY set. However, they struggle on the HARD set, with performance close to random guessing, indicating that more research is needed to substantially enhance the logical reasoning ability of current models.
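The EASY/HARD split described above can be sketched as follows. This is a hypothetical illustration, not the paper's exact procedure: it assumes we already have predictions from a bias-only baseline (for example, a model shown only the answer options, never the context), and marks any data point such a baseline answers correctly as biased (EASY), leaving the rest as HARD.

```python
# Hypothetical sketch of a bias-based EASY/HARD split.
# Assumption: `bias_model_predictions` comes from a context-free baseline
# (e.g., an option-only model); names here are illustrative, not from the paper.

def split_easy_hard(examples, bias_model_predictions):
    """examples: list of dicts with a 'label' key (gold answer index).
    bias_model_predictions: predicted answer indices from the bias-only
    baseline, aligned one-to-one with examples."""
    easy, hard = [], []
    for example, prediction in zip(examples, bias_model_predictions):
        if prediction == example["label"]:
            easy.append(example)   # solvable from surface biases alone
        else:
            hard.append(example)   # presumably requires reading the context
    return easy, hard

# Toy usage: 4-option multiple choice, where random guessing gives ~25%
examples = [{"id": i, "label": i % 4} for i in range(8)]
predictions = [0, 1, 0, 0, 0, 1, 2, 3]
easy, hard = split_easy_hard(examples, predictions)
```

Under this scheme, a model's accuracy gap between the EASY and HARD sets indicates how much of its overall score comes from exploiting dataset biases rather than genuine logical reasoning.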

Weihao Yu, Zihang Jiang, Yanfei Dong, Jiashi Feng • 2020

Related benchmarks

| Task | Dataset | Result (Accuracy) | Rank |
| --- | --- | --- | --- |
| Logical reasoning | LogiQA (test) | 86 | 92 |
| Logical reasoning | ReClor (test) | 68.9 | 87 |
| Logical reasoning | ReClor (dev) | 0.744 | 46 |
| Logical reasoning | LogiQA (dev) | 44.4 | 40 |
| Logical reasoning | ReClor Hard (test) | 87.2 | 37 |
| Logical reasoning | ReClor Easy (test) | 83.4 | 28 |
| Logical reasoning | ReClor (test-H) | 67.2 | 23 |
| Logical reasoning | ReClor v1 (test) | 63 | 23 |
| Logical reasoning | ReClor (test-e) | 57.1 | 23 |
| Logical reasoning | ReClor 1.0 (test-H) | 0.672 | 13 |

Showing 10 of 15 rows

Other info

Code
