
Rule-Guided Feedback: Enhancing Reasoning by Enforcing Rule Adherence in Large Language Models

About

In this paper, we introduce Rule-Guided Feedback (RGF), a framework designed to enhance Large Language Model (LLM) performance through structured rule adherence and strategic information seeking. RGF implements a teacher-student paradigm in which rule-following is enforced through established guidelines. Our framework employs a Teacher model that rigorously evaluates each student output against task-specific rules, providing constructive guidance rather than direct answers when it detects deviations. This iterative feedback loop serves two crucial purposes: keeping solutions within defined constraints and encouraging proactive information seeking to resolve uncertainties. We evaluate RGF on diverse tasks including Checkmate-in-One puzzles, Sonnet Writing, Penguins-in-a-Table classification, GSM8K, and StrategyQA. Our findings suggest that structured feedback mechanisms can significantly enhance LLMs' performance across various domains.
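The teacher-student loop described in the abstract can be sketched as follows. This is an illustrative approximation only, not the authors' implementation: the `student`, `teacher`, and rule representations are hypothetical names introduced here, and the toy rule stands in for the task-specific rules the paper evaluates against.

```python
def rule_guided_feedback(task, rules, student, teacher, max_rounds=3):
    """Iteratively refine a student answer under teacher rule-checking.

    Hypothetical sketch of the RGF loop: the teacher reports rule
    violations as guidance (never the answer itself), and the student
    retries until the rules are satisfied or the round budget runs out.
    """
    answer = student(task, feedback=None)          # initial student attempt
    for _ in range(max_rounds):
        violations = teacher(answer, rules)        # check each task-specific rule
        if not violations:
            break                                  # all rules satisfied
        # Teacher feedback names the violated rules, not the solution.
        answer = student(task, feedback="; ".join(violations))
    return answer

# Toy demo with a single illustrative rule: the answer must be uppercase.
rules = [("answer must be uppercase", str.isupper)]

def teacher(answer, rules):
    # Return the messages of all rules the answer fails.
    return [msg for msg, check in rules if not check(answer)]

def student(task, feedback=None):
    # A stand-in "model": only complies once it receives feedback.
    return task.upper() if feedback else task

print(rule_guided_feedback("hello", rules, student, teacher))  # → HELLO
```

The key design point mirrored here is that the feedback channel carries constraint violations, so the loop terminates early once the teacher finds nothing to object to.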

Aissatou Diallo, Antonis Bikakis, Luke Dickens, Anthony Hunter, Rob Miller • 2025

Related benchmarks

Task                         Dataset            Result                    Rank
Logical reasoning            Logical Deduction  Pass@1: 1                 18
Logical reasoning            LogiQA             Pass@1 Accuracy: 0.791    18
Deductive Reasoning          PrOntoQA           Pass@1: 0.94              18
First-Order Logic Reasoning  FOLIO              Pass@1 Success Rate: 74   18
First-Order Logic Reasoning  LogicNLI           Pass@1: 55                18
Deductive Reasoning          ProofWriter        Pass@1: 88                18
Inductive Reasoning          CLUTRR             Pass@1: 31.3              18
