
Test-Time Policy Adaptation for Enhanced Multi-Turn Interactions with LLMs

About

Large Language Models (LLMs) employ multi-turn interaction as a fundamental paradigm for completing complex tasks. However, their performance often degrades in extended interactions, as they are typically trained on static, single-turn data, which hinders their ability to adapt to real-time user feedback. To address this limitation, we first propose a new paradigm: Test-Time Policy Adaptation for Multi-Turn Interactions (T2PAM), which uses user feedback from the ongoing interaction as a reward signal to estimate a latent optimal policy aligned with user preferences, then updates a small subset of parameters to steer the model toward this policy, enabling efficient in-conversation self-correction. We then introduce Optimum-Referenced One-Step Adaptation (ROSA), a lightweight algorithm that operationalizes T2PAM. ROSA guides the model parameters toward a theoretical optimal policy in a single, efficient update step, avoiding costly iterative gradient-based optimization and minimizing computational overhead. We provide a rigorous theoretical analysis guaranteeing that the policy of ROSA converges to the user's preferences as the number of interactions increases. Extensive experiments on challenging benchmarks demonstrate that ROSA achieves significant improvements in both task effectiveness and efficiency.
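The abstract does not spell out ROSA's update rule, but the idea of a single-step move toward a latent optimal policy can be illustrated with the standard closed form for a KL-regularized objective, where the optimum satisfies pi*(y) ∝ pi(y) · exp(r(y)/beta). A minimal sketch, assuming that closed form over a toy discrete action space (the function names, the candidate-behaviour setup, and the per-turn reward vectors are all illustrative, not the paper's implementation):

```python
import math

def softmax(logits):
    """Convert logits to a probability distribution (numerically stable)."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def one_step_adaptation(logits, rewards, beta=1.0):
    """KL-regularized optimum pi*(y) ∝ pi(y) * exp(r(y)/beta):
    in logit space this is a single additive update -- no iterative
    gradient descent is needed to reach the tilted policy."""
    return [l + r / beta for l, r in zip(logits, rewards)]

# Toy multi-turn loop: each turn, user feedback yields a reward per
# candidate behaviour, and one closed-form step adapts the policy.
logits = [0.0, 0.0, 0.0]          # uniform initial policy over 3 behaviours
feedback = [[1.0, -1.0, 0.0],     # turn 1: user prefers behaviour 0
            [1.0, -1.0, 0.0]]     # turn 2: same preference
for rewards in feedback:
    logits = one_step_adaptation(logits, rewards, beta=1.0)
probs = softmax(logits)           # mass concentrates on behaviour 0
```

With repeated consistent feedback the tilted policy concentrates on the preferred behaviour, mirroring the convergence-to-user-preference guarantee described above; the real method operates on model parameters rather than a bare logit vector.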

Chenxing Wei, Hong Wang, Ying He, Fei Yu, Yao Shu • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Code Generation | HumanEval | -- | -- | 1036 |
| Mathematical Reasoning | MATH 500 | Accuracy | 72.8 | 442 |
| Mathematical Reasoning | MATH | Accuracy | 67.4 | 338 |
| General Reasoning | Super GPQA | Accuracy | 47.8 | 89 |
| Multilingual Mathematical Reasoning | MT Math100 | Accuracy | 88.4 | 64 |
| Mathematical Reasoning | AIME 25 | Accuracy | 36.67 | 54 |
| General Reasoning | MMLU-R | Accuracy | 75.8 | 40 |
| Multilingual Reasoning | MT-AIME 24 | Accuracy (%) | 43.93 | 40 |
| General Reasoning | GPQA Diamond | Avg@8 Accuracy | 75.18 | 34 |
| Multilingual Reasoning | M-IMO | Accuracy | 39.16 | 20 |

Showing 10 of 18 rows
