
Test-Time Preference Optimization: On-the-Fly Alignment via Iterative Textual Feedback

About

Large language models (LLMs) demonstrate impressive performance but lack the flexibility to adapt quickly to human preferences without retraining. In this work, we introduce Test-time Preference Optimization (TPO), a framework that aligns LLM outputs with human preferences during inference, removing the need to update model parameters. Rather than relying on purely numerical rewards, TPO translates reward signals into textual critiques and uses them as textual rewards to iteratively refine its responses. Evaluations on benchmarks covering instruction following, preference alignment, safety, and mathematics show that TPO progressively improves alignment with human preferences. Notably, after only a few TPO steps, the initially unaligned Llama-3.1-70B-SFT model can surpass its aligned counterpart, Llama-3.1-70B-Instruct. Furthermore, TPO scales efficiently with both search width and depth at inference time. Through case studies, we illustrate how TPO exploits the innate capacity of LLMs to interpret and act upon reward signals. Our findings establish TPO as a practical, lightweight alternative for test-time preference optimization, achieving alignment on the fly. Our code is publicly available at https://github.com/yafuly/TPO.

Yafu Li, Xuyang Hu, Xiaoye Qu, Linjie Li, Yu Cheng • 2025

Related benchmarks

Task                   | Dataset                            | Metric       | Result | Rank
Mathematical Reasoning | MATH 500                           | Accuracy     | 77.6   | 442
Mathematical Reasoning | AMC                                | Accuracy     | 55.9   | 221
Mathematical Reasoning | AIME24                             | Accuracy     | 6.7    | 160
Machine Translation    | WMT24 literary translation (zh→ru) | SEGALE COMET | 92.63  | 13
Machine Translation    | WMT24 literary translation (zh→en) | SEGALE COMET | 88.81  | 13
Machine Translation    | WMT24 literary translation (zh→de) | SEGALE COMET | 87.67  | 13
