
Fact-driven Logical Reasoning for Machine Reading Comprehension

About

Recent years have witnessed an increasing interest in training machines with reasoning ability, which relies heavily on accurately and clearly presented clue forms. The clues are usually modeled as entity-aware knowledge in existing studies. However, those entity-aware clues are primarily focused on commonsense, making them insufficient for tasks that require knowledge of temporary facts or events, particularly in logical reasoning for reading comprehension. To address this challenge, we are motivated to cover both commonsense and temporary knowledge clues hierarchically. Specifically, we propose a general formalism of knowledge units by extracting backbone constituents of the sentence, such as the subject-verb-object formed "facts". We then construct a supergraph on top of the fact units, allowing for the benefit of sentence-level (relations among fact groups) and entity-level interactions (concepts or actions inside a fact). Experimental results on logical reasoning benchmarks and dialogue modeling datasets show that our approach improves the baselines substantially, and it is general across backbone models. Code is available at https://github.com/ozyyshr/FocalReasoner.
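To make the two ideas in the abstract concrete, here is a minimal illustrative sketch (not the paper's implementation): each clause is modeled as a subject-verb-object "fact" unit, and fact units are linked into a supergraph with sentence-level edges (facts from the same sentence) and entity-level edges (facts sharing a concept). The toy `Fact` records and the rule-based `build_supergraph` helper are assumptions for illustration; in practice the fact units would come from a syntactic parser.

```python
# Illustrative sketch of fact units and a supergraph over them.
# Fact extraction is stubbed out: in the paper's setting, subject-verb-object
# constituents would be extracted from sentences by a parser.
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Fact:
    """A subject-verb-object fact unit tagged with its source sentence."""
    subject: str
    verb: str
    obj: str
    sentence_id: int

def build_supergraph(facts):
    """Connect fact units with two edge types:
    - "sent": both facts come from the same sentence (sentence-level relation)
    - "entity": the facts share a subject/object concept (entity-level relation)
    Returns a list of (fact_a, fact_b, edge_type) tuples."""
    edges = []
    for a, b in combinations(facts, 2):
        if a.sentence_id == b.sentence_id:
            edges.append((a, b, "sent"))
        if {a.subject, a.obj} & {b.subject, b.obj}:
            edges.append((a, b, "entity"))
    return edges

# Toy context: three facts from two sentences, chained by shared entities.
facts = [
    Fact("company", "hired", "workers", 0),
    Fact("workers", "joined", "union", 1),
    Fact("union", "opposed", "policy", 1),
]
edges = build_supergraph(facts)
```

On this toy input the graph has one sentence-level edge (the two facts of sentence 1) and two entity-level edges (via the shared concepts "workers" and "union"), which is exactly the kind of cross-fact structure a graph reasoner can then propagate over.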

Siru Ouyang, Zhuosheng Zhang, Hai Zhao • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Logical reasoning | LogiQA (test) | Accuracy | 45.8 | 92 |
| Logical reasoning | ReClor (test) | Accuracy | 73.3 | 87 |
| Logical reasoning | LogiQA (val) | Accuracy | 41.01 | 50 |
| Logical reasoning | ReClor (dev) | Accuracy | 0.786 | 46 |
| Logical reasoning | LogiQA (dev) | Accuracy | 47.3 | 40 |
| Logical reasoning | ReClor Hard (test) | Accuracy | 63 | 37 |
| Logical reasoning | ReClor Easy (test) | Accuracy | 86.4 | 28 |
| Logical reasoning | ReClor v1 (test) | Accuracy | 58.9 | 23 |
| Logical reasoning | ReClor (test-e) | Accuracy | 77.05 | 23 |
| Logical reasoning | ReClor (test-H) | Accuracy | 44.64 | 23 |

Showing 10 of 18 rows.
