
Large Language Models can Learn Rules

About

When prompted with a few examples and intermediate steps, large language models (LLMs) have demonstrated impressive performance on various reasoning tasks. However, prompting methods that rely on implicit knowledge in an LLM often generate incorrect answers when that implicit knowledge is wrong or inconsistent with the task. To tackle this problem, we present Hypotheses-to-Theories (HtT), a framework that learns a rule library for reasoning with LLMs. HtT contains two stages: an induction stage and a deduction stage. In the induction stage, an LLM is first asked to generate and verify rules over a set of training examples. Rules that appear and lead to correct answers sufficiently often are collected to form a rule library. In the deduction stage, the LLM is then prompted to apply the learned rule library when reasoning over test questions. Experiments on relational reasoning, numerical reasoning, and concept learning problems show that HtT improves existing prompting methods, with an absolute gain of 10-30% in accuracy. The learned rules are also transferable to different models and to different forms of the same problem.
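The two stages described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the `llm` callable, the rule representation (plain strings), and the `min_count` / `min_precision` thresholds are all assumptions introduced here to make the induction-then-deduction flow concrete.

```python
from collections import Counter

def induce_rule_library(llm, train_examples, min_count=2, min_precision=0.5):
    """Induction stage (sketch): ask the LLM to propose rules while answering
    each training question, then keep rules that both appear often enough and
    usually co-occur with a correct answer.

    `llm` is a hypothetical callable: question -> (list_of_rules, answer).
    """
    occurrences, correct = Counter(), Counter()
    for question, gold_answer in train_examples:
        rules, answer = llm(question)
        for rule in rules:
            occurrences[rule] += 1
            if answer == gold_answer:
                correct[rule] += 1
    # Keep rules seen at least min_count times whose answers were
    # correct at least min_precision of the time.
    return {
        rule
        for rule, n in occurrences.items()
        if n >= min_count and correct[rule] / n >= min_precision
    }

def deduce(llm, rule_library, question):
    """Deduction stage (sketch): prepend the learned rules to the prompt so
    the LLM retrieves rules from the library instead of hallucinating them."""
    prompt = "Rules:\n" + "\n".join(sorted(rule_library)) + "\n\nQ: " + question
    return llm(prompt)
```

For example, with a toy `llm` that proposes a reliable rule on some questions and a spurious one elsewhere, only the reliable rule survives induction and is then injected into every deduction prompt.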

Zhaocheng Zhu, Yuan Xue, Xinyun Chen, Denny Zhou, Jian Tang, Dale Schuurmans, Hanjun Dai • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Logical reasoning | Logical Deduction | Pass@1 | 1 | 18 |
| Logical reasoning | LogiQA | Pass@1 Accuracy | 0.791 | 18 |
| Deductive Reasoning | PrOntoQA | Pass@1 | 0.92 | 18 |
| Deductive Reasoning | ProofWriter | Pass@1 | 88 | 18 |
| Inductive Reasoning | CLUTRR | Pass@1 | 40.3 | 18 |
| First-Order Logic Reasoning | LogicNLI | Pass@1 | 54 | 18 |
| First-Order Logic Reasoning | FOLIO | Pass@1 Success Rate | 71 | 18 |
