
ACR: Adaptive Context Refactoring via Context Refactoring Operators for Multi-Turn Dialogue

About

Large Language Models (LLMs) have shown remarkable performance in multi-turn dialogue. Yet as interactions grow longer, models still struggle to stay aligned with what was established earlier, to follow dependencies spanning many turns, and to avoid drifting into incorrect facts. Existing approaches primarily extend the context window, introduce external memory, or apply context compression, but these methods still face limitations such as contextual inertia and state drift. To address these challenges, we propose the Adaptive Context Refactoring (ACR) framework, which dynamically monitors and reshapes the interaction history to actively mitigate contextual inertia and state drift. ACR is built on a library of context refactoring operators and a teacher-guided self-evolving training paradigm that learns when to intervene and how to refactor, thereby decoupling context management from the reasoning process. Extensive experiments on multi-turn dialogue demonstrate that our method significantly outperforms existing baselines while reducing token consumption.
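The operator-based design described above can be pictured as a small loop: monitor the dialogue history, and when a trigger fires, apply refactoring operators to reshape it before the next reasoning step. The sketch below is purely illustrative; the operator names (`prune_stale`, `summarize_prefix`), the length-based trigger, and all thresholds are assumptions for exposition, not the paper's actual operator library or learned intervention policy.

```python
# Illustrative sketch of operator-based context refactoring.
# All operators and triggers here are toy stand-ins, NOT the paper's method.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class DialogueState:
    turns: List[str] = field(default_factory=list)


# A "context refactoring operator" maps one history to a reshaped one.
Operator = Callable[[List[str]], List[str]]


def prune_stale(turns: List[str]) -> List[str]:
    """Drop turns explicitly marked as retracted (toy heuristic)."""
    return [t for t in turns if not t.startswith("[retracted]")]


def summarize_prefix(turns: List[str], keep_last: int = 2) -> List[str]:
    """Collapse older turns into one summary line; keep recent turns verbatim."""
    if len(turns) <= keep_last:
        return turns
    summary = "SUMMARY: " + " | ".join(t[:20] for t in turns[:-keep_last])
    return [summary] + turns[-keep_last:]


def refactor_if_needed(state: DialogueState, max_turns: int = 4) -> DialogueState:
    """Monitor the history and apply operators only when a trigger fires,
    keeping context management separate from the reasoning step."""
    turns = prune_stale(state.turns)
    if len(turns) > max_turns:  # trigger: history grew too long
        turns = summarize_prefix(turns, keep_last=2)
    return DialogueState(turns=turns)


history = DialogueState(turns=[
    "user: book a flight", "[retracted] assistant: to Paris?",
    "user: to Tokyo", "assistant: noted Tokyo",
    "user: window seat", "assistant: done",
])
compact = refactor_if_needed(history)
```

In ACR, the decision of *when* to intervene and *which* operator to apply is learned via the teacher-guided self-evolving training paradigm, rather than hard-coded as in this toy trigger.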

Jiawei Shen, Jia Zhu, Hanghui Guo, Weijie Shi, Yue Cui, Qingyu Niu, Guoqing Ma, Yidan Liang, Jingjiang Liu, Yiling Wang, Shimin Di, Jiajie Xu • 2026

Related benchmarks

Task                 Dataset           Metric   Result   Rank
Question Answering   MuSiQue           EM       16.67    84
Question Answering   2WikiMultihopQA   EM       34.32    73
Question Answering   Bamboogle         EM       36.36    62
Question Answering   PopQA             EM       36.04    7
