
A First-Order Logic-Based Alternative to Reward Models in RLHF

About

Reinforcement Learning from Human Feedback (RLHF) plays a crucial role in aligning large language models (LLMs) with human values and preferences. However, the quality and stability of the trained reward model largely determine the final alignment performance. Existing approaches such as Proximal Policy Optimization (PPO) rely heavily on reward models to guide LLMs toward human-aligned behaviors. In this work, we propose a logic-similarity-based reward mechanism as an alternative to conventional reward modeling. Instead of relying on heuristic reward estimation, our method leverages formal logical consistency to steer model alignment with human preferences. Because real-world questions can be interpreted from multiple perspectives, purely logic-based reinforcement learning risks model collapse; to prevent this, we introduce S-GRPO, a supervised variant of the GRPO framework. S-GRPO incorporates an additional supervised component and jointly optimizes the generation term, KL-divergence regularization, and label-based objective during training. Experimental results demonstrate that S-GRPO consistently outperforms standard supervised fine-tuning (SFT) in both performance and robustness. Furthermore, it extends existing preference-learning frameworks such as GRPO and DPO, offering a more flexible and task-adaptive approach to alignment training. Our code is available at https://github.com/ChunjinJiang/sgrpo.
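As a rough illustration of the joint objective described above, the sketch below combines a GRPO-style generation term, a KL-divergence penalty toward a frozen reference model, and a supervised label-based term into a single loss. All names and coefficients (e.g. `sgrpo_loss`, `kl_coef`, `sup_coef`) are illustrative assumptions, not the authors' implementation; the exact weighting, KL estimator, and logic-similarity reward are defined in the paper and its repository.

```python
# Minimal sketch (PyTorch) of an S-GRPO-style joint objective, assuming the
# three components named in the abstract: a generation (policy-gradient) term,
# a KL-divergence regularizer, and a supervised label-based term.
# Names and coefficients are illustrative, not taken from the authors' code.
import torch.nn.functional as F

def sgrpo_loss(policy_logprobs,   # log p_theta(token) for sampled responses, shape [B, T]
               ref_logprobs,      # log p_ref(token) for the same tokens, shape [B, T]
               advantages,        # group-normalized logic-similarity rewards, shape [B, 1]
               sft_logits,        # policy logits on labeled data, shape [N, V]
               sft_labels,        # gold token ids for labeled data, shape [N]
               kl_coef=0.05,      # weight of the KL regularizer (assumed value)
               sup_coef=1.0):     # weight of the supervised term (assumed value)
    # Generation term: advantage-weighted log-likelihood of sampled responses.
    pg_loss = -(advantages * policy_logprobs).mean()

    # KL regularization: simple Monte Carlo estimate of KL(policy || reference).
    kl_loss = (policy_logprobs - ref_logprobs).mean()

    # Label-based objective: standard cross-entropy against supervised targets.
    sup_loss = F.cross_entropy(sft_logits, sft_labels)

    return pg_loss + kl_coef * kl_loss + sup_coef * sup_loss
```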

Chunjin Jian, Xinhua Zhu • 2025

Related benchmarks

Task | Dataset | Metric | Score | Rank
Machine Translation | WMT EN-DE 2022 | COMET22 | 79.13 | 16
Machine Translation | WMT DE-EN 2022 | BLEU | 30.18 | 9
First-Order Logic Translation | FOLIO (test) | BLEU | 66 | 7
Preference Alignment | PKU-SafeRLHF (test) | Win Rate | 22.82 | 3
