On a Connection Between Imitation Learning and RLHF

About

This work studies the alignment of large language models with preference data from an imitation learning perspective. We establish a close theoretical connection between reinforcement learning from human feedback (RLHF) and imitation learning (IL), revealing that RLHF implicitly performs imitation learning on the preference data distribution. Building on this connection, we propose DIL, a principled framework that directly optimizes the imitation learning objective. DIL provides a unified imitation learning perspective on alignment, encompassing existing alignment algorithms as special cases while naturally introducing new variants. By bridging IL and RLHF, DIL offers new insights into alignment with RLHF. Extensive experiments demonstrate that DIL outperforms existing methods on various challenging benchmarks.

Teng Xiao, Yige Yuan, Mingxiao Li, Zhengyu Chen, Vasant G Honavar • 2025
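
To make the abstract's idea of "directly optimizing an imitation objective on preference data" concrete, here is a minimal PyTorch sketch of a standard DPO-style pairwise loss, in which the scaled log density ratio between the policy and a reference model acts as an implicit reward. This is an illustrative assumption, not the paper's DIL objective: the function name, argument names, and the choice of the logistic (Bradley-Terry) loss are all hypothetical stand-ins.

import torch
import torch.nn.functional as F

def pairwise_alignment_loss(policy_chosen_logps: torch.Tensor,
                            policy_rejected_logps: torch.Tensor,
                            ref_chosen_logps: torch.Tensor,
                            ref_rejected_logps: torch.Tensor,
                            beta: float = 0.1) -> torch.Tensor:
    """Pairwise loss over (chosen, rejected) response pairs.

    The implicit reward is beta * log(pi_theta / pi_ref); the loss
    raises the policy's density ratio on chosen responses and lowers
    it on rejected ones. Sketch only, not the paper's DIL loss.
    """
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic (Bradley-Terry) loss on the reward margin between pairs.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

In this framing, no explicit reward model is trained: the preference pairs directly shape the policy's density ratio over the preference data distribution, which is the imitation-learning reading of RLHF that the abstract describes.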

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Personalization | Community Alignment (CA) | Personalization Win Rate | 84.17 | 45 |
| Personalization | Multi-Bench (MB) | Win Rate | 90.48 | 45 |
| Personalization | PRISM | Personalization Win Rate | 78.52 | 45 |
| Question Answering | ARC-Challenge 0-shot (test) | Accuracy | 90 | 39 |
| Mathematical Reasoning | SVAMP 8-shot (test) | Accuracy | 92 | 25 |
| Multiple-choice Question Answering | MMLU zero-shot (test) | Accuracy | 76 | 25 |
| Mathematical Reasoning | GSM8K 8-shot (test) | Accuracy | 92.5 | 25 |
| Multiple-choice Question Answering | ARC-Easy zero-shot (test) | Accuracy | 93.6 | 25 |
