AdaBoN: Adaptive Best-of-N Alignment

About

Recent advances in test-time alignment methods, such as Best-of-N sampling, offer a simple and effective way to steer language models (LMs) toward preferred behaviors using reward models (RMs). However, these approaches can be computationally expensive, especially when applied uniformly across prompts without accounting for differences in alignment difficulty. In this work, we propose a prompt-adaptive strategy for Best-of-N alignment that allocates inference-time compute more efficiently. Motivated by latency concerns, we develop a two-stage algorithm: an initial exploratory phase estimates the reward distribution for each prompt using a small exploration budget, and a second stage adaptively allocates the remaining budget using these estimates. Our method is simple, practical, and compatible with any LM/RM combination. Empirical results on prompts from the AlpacaEval, HH-RLHF, and PKU-SafeRLHF datasets, across 12 LM/RM pairs and 50 different batches of prompts, show that our adaptive strategy outperforms uniform allocation under the same inference budget. Moreover, we show that our adaptive strategy remains competitive against uniform allocations with 20% larger inference budgets, and that it improves as the batch size grows.

Vinod Raman, Hilal Asi, Satyen Kale • 2025
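The abstract does not spell out the stage-2 allocation rule, so the following is only a minimal sketch of how such a two-stage scheme could look, not the paper's exact method. The `generate` and `reward` callables are placeholders for any LM/RM pair, and allocating the remaining budget in proportion to each prompt's estimated reward spread is an assumption made for illustration.

```python
import numpy as np

def adaptive_best_of_n(prompts, generate, reward, total_budget, explore_per_prompt=2):
    """Hedged sketch of two-stage adaptive Best-of-N.

    generate(prompt) -> str            # one LM sample (placeholder)
    reward(prompt, response) -> float  # RM score (placeholder)
    """
    n = len(prompts)

    # Stage 1: spend a small exploration budget per prompt to
    # estimate each prompt's reward distribution.
    samples = [[generate(p) for _ in range(explore_per_prompt)] for p in prompts]
    scores = [[reward(p, s) for s in cand] for p, cand in zip(prompts, samples)]
    spread = np.array([np.std(sc) + 1e-8 for sc in scores])

    # Stage 2 (assumed rule): split the remaining budget in proportion
    # to the estimated reward spread, so prompts whose reward varies
    # more, where extra draws help most, receive more samples.
    remaining = total_budget - n * explore_per_prompt
    alloc = np.floor(remaining * spread / spread.sum()).astype(int)

    best = []
    for i, p in enumerate(prompts):
        samples[i].extend(generate(p) for _ in range(alloc[i]))
        scores[i] = [reward(p, s) for s in samples[i]]
        best.append(samples[i][int(np.argmax(scores[i]))])
    return best
```

For comparison, a uniform baseline with the same total budget would simply draw `total_budget / n` samples for every prompt; the adaptive variant instead shifts draws toward the prompts its exploration phase flags as harder to align.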

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Alignment | HH-RLHF | Estimated Score (EST) | 154 | 12 |
| Best-of-N Alignment | PKU-SafeRLHF | Percent batches with BWR > 0.50 | 38 | 12 |
| Best-of-N Alignment | HH-RLHF | BWR | 53 | 12 |
| Best-of-N Alignment | HH-RLHF (test) | Percent batches with BWR > 0.50 | 98 | 12 |
| Best-of-N Alignment | AlpacaEval (test) | BWR | 62 | 12 |
| Best-of-N Alignment Evaluation | AlpacaEval (main) | Expected Survival Time (EST) | 153 | 12 |
| LLM Alignment | PKU-SafeRLHF | BWR (Median) | 49 | 12 |
| LLM Alignment | AlpacaEval | Percent batches with BWR > 0.50 | 100 | 12 |
| Preference Alignment | AlpacaEval | Win Rate | 52 | 12 |
