
HPS: Hard Preference Sampling for Human Preference Alignment

About

Aligning Large Language Model (LLM) responses with human preferences is vital for building safe and controllable AI systems. While preference optimization methods based on Plackett-Luce (PL) and Bradley-Terry (BT) models have shown promise, they face challenges such as poor handling of harmful content, inefficient use of dispreferred responses, and, specifically for PL, high computational costs. To address these issues, we propose Hard Preference Sampling (HPS), a novel framework for robust and efficient human preference alignment. HPS introduces a training loss that prioritizes the most preferred response while rejecting all dispreferred and harmful ones. It emphasizes "hard" dispreferred responses -- those closely resembling preferred ones -- to enhance the model's rejection capabilities. By leveraging a single-sample Monte Carlo sampling strategy, HPS reduces computational overhead while maintaining alignment quality. Theoretically, HPS improves sample efficiency over existing PL methods and maximizes the reward margin between preferred and dispreferred responses, ensuring clearer distinctions. Experiments on HH-RLHF and PKU-Safety datasets validate HPS's effectiveness, achieving comparable BLEU and reward scores while greatly improving reward margins and thus reducing harmful content generation.
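For intuition, below is a minimal PyTorch sketch of a contrastive preference loss in the spirit the abstract describes: the most preferred response is pushed above a single dispreferred response sampled with extra weight on "hard" ones. The function name, tensor shapes, and the softmax-based hardness weighting are illustrative assumptions for exposition, not the paper's exact HPS formulation.

```python
import torch
import torch.nn.functional as F

def hps_style_loss(r_pref, r_dispref, beta=1.0, tau=1.0):
    """Illustrative hard-negative preference loss (assumed form, not the paper's).

    r_pref:    (B,)   implicit rewards of the preferred response per prompt
    r_dispref: (B, K) implicit rewards of K dispreferred responses per prompt
    beta:      scale on the reward margin
    tau:       temperature controlling how strongly "hard" dispreferred
               responses (rewards close to the preferred one) are emphasized
    """
    # Hardness weights: dispreferred responses with higher reward (i.e., closer
    # to the preferred response) receive more probability mass.
    with torch.no_grad():
        hardness = F.softmax(r_dispref / tau, dim=-1)      # (B, K)

    # Single-sample Monte Carlo: draw one dispreferred response per prompt
    # from the hardness distribution instead of summing over all K.
    idx = torch.multinomial(hardness, num_samples=1)       # (B, 1)
    r_hard = r_dispref.gather(-1, idx).squeeze(-1)         # (B,)

    # Maximize the margin between the preferred and the sampled hard response.
    return -F.logsigmoid(beta * (r_pref - r_hard)).mean()

# Toy usage with random "implicit rewards" (e.g., log-prob ratios in a DPO-style setup).
B, K = 4, 8
loss = hps_style_loss(torch.randn(B, requires_grad=True), torch.randn(B, K))
loss.backward()
```

The sampling step is what keeps the per-step cost independent of the number of dispreferred responses, which is the computational saving the abstract attributes to single-sample Monte Carlo estimation.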

Xiandong Zou, Wanyu Lin, Yuchen Li, Pan Zhou • 2025

Related benchmarks

Task                        | Dataset                                                     | Result             | Rank
Preference Alignment        | HH-RLHF                                                     | BLEU 0.275         | 31
Human Preference Alignment  | PKU-SafeRLHF                                                | BLEU 0.314         | 31
Human Preference Alignment  | User study dataset, HH-RLHF and PKU-SafeRLHF prompts (test) | Quality Score 3.93 | 4
Human Preference Alignment  | PKU-Safety                                                  | Win Rate 67.1      | 3
