
Mutual-Taught for Co-adapting Policy and Reward Models

About

During the preference optimization of large language models (LLMs), distribution shifts may arise between newly generated model samples and the data used to train the reward model (RM). This shift reduces the efficacy of the RM, which in turn negatively impacts the performance of the policy model (PM). To address this challenge, we propose Mutual-Taught, a self-training method that iteratively improves both the PM and RM without requiring additional human annotation. Our approach mirrors the expectation-maximization (EM) algorithm. In the E-step, the PM is updated using feedback from the current RM, guiding the PM toward a better approximation of the latent optimal preference distribution. In the M-step, we update the RM by constructing training data from the outputs of the PM before and after the E-step update. This process ensures that the RM adapts to the evolving policy distribution. Experimental results demonstrate that this iterative approach leads to consistent improvements in both models. Specifically, our 8B policy model, LLaMA-3-8B-Instruct-MT, achieves a length-controlled win rate of 54.1% on AlpacaEval-2, while our 8B reward model, FsfairX-LLaMA3-RM-MT, performs on par with GPT-4o-2024-08-06 on RewardBench.
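
For illustration, below is a minimal Python sketch of one Mutual-Taught iteration as described above. It is not the authors' implementation: the helper routines (generate, score, dpo_update, rm_update) are hypothetical stubs, the DPO-style E-step is only one plausible instantiation of "feedback from the current RM", and treating post-update outputs as chosen and pre-update outputs as rejected in the M-step is one reading of the abstract.

```python
# Minimal, self-contained sketch of the Mutual-Taught loop (illustrative only).
# All helper names below are hypothetical stand-ins, not the paper's code.
import random

def generate(policy, prompt, n):
    """Stand-in for sampling n responses from the policy."""
    return [f"{policy}:{prompt}:{random.random():.3f}" for _ in range(n)]

def score(reward_model, prompt, response):
    """Stand-in for the reward model's scalar score of a response."""
    return random.random()

def dpo_update(policy, preference_pairs):
    """Stand-in for a DPO-style policy update on (prompt, chosen, rejected) pairs."""
    return policy + "+"

def rm_update(reward_model, preference_pairs):
    """Stand-in for fine-tuning the RM on new pseudo-preference pairs."""
    return reward_model + "+"

def mutual_taught(policy, reward_model, prompts, num_iters=2):
    for _ in range(num_iters):
        # E-step: sample candidates per prompt, rank them with the current RM,
        # and update the policy on best-vs-worst preference pairs.
        pairs = []
        for x in prompts:
            candidates = sorted(generate(policy, x, n=4),
                                key=lambda y: score(reward_model, x, y))
            pairs.append((x, candidates[-1], candidates[0]))  # (chosen, rejected)
        new_policy = dpo_update(policy, pairs)

        # M-step: adapt the RM to the shifted policy distribution by pairing
        # post-update outputs ("chosen") with pre-update outputs ("rejected").
        rm_pairs = [(x, generate(new_policy, x, n=1)[0], generate(policy, x, n=1)[0])
                    for x in prompts]
        reward_model = rm_update(reward_model, rm_pairs)
        policy = new_policy
    return policy, reward_model

policy, rm = mutual_taught("pm", "rm", ["prompt-1", "prompt-2"])
```

The structural point the sketch tries to capture is that each M-step regenerates preference data from the current policy's own outputs, so the RM tracks the shifting sample distribution without additional human labels.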

Tianyuan Shi, Canbin Huang, Fanqi Wan, Longguang Zhong, Ziyi Yang, Weizhou Shen, Xiaojun Quan, Ming Yan • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Commonsense Reasoning | HellaSwag | Accuracy | 81.37 | 1460
Multi-task Language Understanding | MMLU | Accuracy | 64.13 | 842
Instruction Following | AlpacaEval 2.0 | LC Win Rate | 54.1 | 281
Instruction Following | Arena Hard | Win Rate | 38.4 | 77
Truthfulness | TruthfulQA | Truthfulness Accuracy | 55.21 | 14
Reward Modeling | RewardBench (out-of-distribution evaluation) | Chat | 98.3 | 4

Other info

Code
