
Fast Best-of-N Decoding via Speculative Rejection

About

The safe and effective deployment of Large Language Models (LLMs) involves a critical step called alignment, which ensures that the model's responses accord with human preferences. Prevalent alignment techniques, such as DPO, PPO, and their variants, align LLMs by changing the pre-trained model weights during a phase called post-training. While predominant, these post-training methods add substantial complexity before LLMs can be deployed. Inference-time alignment methods avoid the complex post-training step and instead bias the generation towards responses that are aligned with human preferences. The best-known inference-time alignment method, called Best-of-N, is as effective as state-of-the-art post-training procedures. Unfortunately, Best-of-N requires vastly more resources at inference time than standard decoding strategies, which makes it computationally impractical. In this work, we introduce Speculative Rejection, a computationally viable inference-time alignment algorithm. It generates high-scoring responses according to a given reward model, as Best-of-N does, while being 16 to 32 times more computationally efficient.

Hanshi Sun, Momin Haider, Ruiqi Zhang, Huitao Yang, Jiahao Qiu, Ming Yin, Mengdi Wang, Peter Bartlett, Andrea Zanette · 2024
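The abstract describes both procedures only at a high level, so below is a minimal, self-contained Python sketch of the two ideas: the Best-of-N baseline (sample N complete responses and return the one the reward model scores highest) and speculative rejection (start many generations, and at periodic checkpoints rank the partial responses by reward and drop the low-scoring ones). Everything in the sketch is an illustrative assumption rather than the paper's implementation: `generate_step` and `reward` are toy stand-ins for an LLM and a trained reward model, and hyperparameters such as `chunk` and `keep_frac` are hypothetical names.

```python
"""Sketch of Best-of-N decoding and speculative rejection.

`generate_step` and `reward` are toy stand-ins (assumptions): a real
system would decode with an LLM and score with a trained reward model.
"""
import random

def generate_step(prompt: str, partial: str, n_tokens: int) -> str:
    # Toy generator: extends a partial response with random "tokens".
    # A real implementation would continue LLM decoding from the prompt.
    return partial + "".join(random.choice("abcde ") for _ in range(n_tokens))

def reward(response: str) -> float:
    # Toy reward model: higher score for more 'a' characters.
    return response.count("a") / max(len(response), 1)

def best_of_n(prompt: str, n: int = 8, max_len: int = 64) -> str:
    """Best-of-N: sample n complete responses, return the highest-reward one.
    Effective, but decodes n full responses, hence the large inference cost."""
    candidates = [generate_step(prompt, "", max_len) for _ in range(n)]
    return max(candidates, key=reward)

def speculative_rejection(prompt: str, n: int = 64, chunk: int = 16,
                          keep_frac: float = 0.5, max_len: int = 64) -> str:
    """Speculative rejection (sketch): start n generations, and at each
    checkpoint score the *partial* responses with the reward model and
    reject the low-scoring ones, concentrating the remaining decoding
    compute on the most promising prefixes."""
    beams = [""] * n
    produced = 0
    while produced < max_len:
        # Extend every surviving partial response by `chunk` tokens.
        beams = [generate_step(prompt, b, chunk) for b in beams]
        produced += chunk
        if len(beams) > 1:
            # Rank prefixes by reward; keep only the top fraction.
            beams.sort(key=reward, reverse=True)
            beams = beams[: max(1, int(len(beams) * keep_frac))]
    return max(beams, key=reward)

if __name__ == "__main__":
    random.seed(0)
    print("best-of-n:            ", best_of_n("example prompt"))
    print("speculative rejection:", speculative_rejection("example prompt"))
```

The rejection step is "speculative" in the sense that the reward model scores incomplete responses, under the bet that a low-reward prefix rarely recovers into a high-reward completion; this early pruning is what would let such a method approach Best-of-N quality at a fraction of the decoding cost.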

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-turn Instruction Following | MT-Bench | – | – | 44 |
| Reward-oriented Decoding | Reward-oriented Decoding Evaluation | PPL | 1.299 | 28 |
| Instruction Following | AlpacaFarm Eval (test) | Win Rate | 73.6 | 28 |
| Multi-turn Instruction Following | MT-Bench High-Variance (Top 20%) | Reward Score | 5.79 | 26 |
| Instruction Following | AlpacaEval 2.0 High-Variance (Top 20%) | Reward Score | 7.48 | 26 |
| Instruction Following | AlpacaEval 2.0 (Overall) | Reward | 4.52 | 26 |
| LLM Alignment | HH-RLHF 300 prompts | Win/Tie Rate vs Vanilla (GPT-4o) | 50.4 | 16 |
| Chatbot Evaluation | MT-Bench Overall | Human Score | 7.56 | 13 |
| Chatbot Evaluation | MT-Bench High-Disagreement (Top 20%) | Human Score | 7.62 | 13 |
| LLM Alignment Evaluation | Qwen2.5-14B-Instruct Overall | Reward (Avg μ) | 6.31 | 6 |

Showing 10 of 11 rows.

Other info

Code
