
Aligning Language Models from User Interactions

About

Multi-turn user interactions are among the most abundant data produced by language models, yet we lack effective methods to learn from them. While typically discarded, these interactions often contain useful information: follow-up user messages may indicate that a response was incorrect, failed to follow an instruction, or did not align with the user's preferences. Importantly, language models are already able to make use of this information in context. After observing a user's follow-up, the same model is often able to revise its behavior. We leverage this ability to propose a principled and scalable method for learning directly from user interactions through self-distillation. By conditioning the model on the user's follow-up message and comparing the resulting token distribution with the original policy, we obtain a target for updating the policy that captures how the model's behavior changes in hindsight. We then distill this hindsight distribution back into the current policy. Remarkably, we show that training on real-world user conversations from WildChat improves language models across standard alignment and instruction-following benchmarks, without regressing other capabilities. The same mechanism enables personalization, allowing models to continually adapt to individual users through interaction without explicit feedback. Our results demonstrate that raw user interactions that arise naturally during deployment enable alignment, personalization, and continual adaptation.
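The core mechanism described above (condition the model on the user's follow-up message, take the resulting hindsight token distribution as a target, and distill it back into the current policy) can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a per-token KL distillation objective over a toy vocabulary, and all logit values below are illustrative.

```python
import math

def softmax(logits):
    """Convert a list of logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def kl_divergence(p, q):
    """KL(p || q): how far the policy q is from the hindsight target p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits over a 3-token vocabulary for one response position.
policy_logits = [2.0, 0.5, -1.0]      # pi(y | x): original policy
hindsight_logits = [0.2, 2.5, -1.0]   # pi(y | x, follow-up): after seeing the user's follow-up

p = softmax(hindsight_logits)  # hindsight distribution: the distillation target
q = softmax(policy_logits)     # current policy distribution

# Minimizing this loss (e.g. by gradient descent on the policy's parameters)
# distills the hindsight behavior back into the policy.
loss = kl_divergence(p, q)
```

In a real training loop the two distributions would come from the same language model evaluated with and without the follow-up message in context, and the loss would be averaged over response tokens.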

Thomas Kleine Buening, Jonas Hübotter, Barna Pásztor, Idan Shenfeld, Giorgia Ramponi, Andreas Krause • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Instruction Following | IFEval | -- | 625 |
| Commonsense Reasoning | HellaSwag | Accuracy: 57.1 | 350 |
| LLM Alignment Evaluation | AlpacaEval 2 | LC Win Rate: 51.9 | 86 |
| Commonsense Question Answering | CommonsenseQA | Accuracy: 78.71 | 83 |
| Question Answering | TruthfulQA MC1 | MC1 Accuracy: 36.47 | 54 |
| Knowledge Reasoning | MMLU-Pro | -- | 40 |
| General Reasoning and Creative Writing | ArenaHard V2 | Hard Prompt Score: 15.5 | 8 |
