
You only need 4 extra tokens: Synergistic Test-time Adaptation for LLMs

About

Large language models (LLMs) are increasingly deployed in specialized domains such as finance, medicine, and agriculture, where they face significant distribution shifts from their training data. Domain-specific fine-tuning can mitigate this challenge but relies on high-quality labeled data that is expensive and slow to collect in expertise-limited settings. We study label-free test-time adaptation for language models and present SyTTA, an inference-time framework that adapts models on the fly without additional supervision. SyTTA couples two complementary uncertainty signals that arise under distribution shift: input-side perplexity, indicating mismatch with domain-specific terminology and patterns, and output-side predictive entropy, indicating diffuse and unstable token probabilities during generation. Across diverse model architectures and domain-specific benchmarks, SyTTA delivers consistent gains. Notably, on agricultural question answering, SyTTA improves ROUGE-LSum by over 120% on Qwen-2.5-7B with only 4 extra tokens per query. These results show that effective test-time adaptation for language models is achievable without labeled examples, supporting deployment in label-scarce domains. The code will be made available upon acceptance.
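The two uncertainty signals the abstract names can be sketched directly from a model's next-token logits. The snippet below is a minimal illustration, not SyTTA's actual adaptation rule (which the abstract does not detail): input-side perplexity is the exponentiated mean negative log-likelihood of the observed prompt tokens, and output-side predictive entropy is the mean Shannon entropy of the next-token distributions. Function names and shapes here are illustrative assumptions.

```python
import numpy as np

def log_softmax(logits):
    # Numerically stable log-softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def input_perplexity(logits, token_ids):
    # Input-side signal: perplexity of the observed tokens under the model.
    # logits: (T, V) next-token logits; token_ids: (T,) observed token indices.
    logp = log_softmax(logits)
    nll = -logp[np.arange(len(token_ids)), token_ids].mean()
    return float(np.exp(nll))

def predictive_entropy(logits):
    # Output-side signal: mean entropy of the next-token distributions.
    # High values indicate diffuse, unstable token probabilities.
    logp = log_softmax(logits)
    return float(-(np.exp(logp) * logp).sum(axis=-1).mean())

# Sanity check with uniform logits over a vocabulary of 10:
# perplexity equals the vocabulary size, entropy equals log(10).
uniform = np.zeros((5, 10))
print(input_perplexity(uniform, np.zeros(5, dtype=int)))  # → 10.0
print(predictive_entropy(uniform))                        # → ln(10) ≈ 2.303
```

Under distribution shift, both quantities tend to rise, which is why an adaptation method can use them together as a label-free training signal.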

Yijie Xu, Huizai Yao, Zhiyu Guo, Pengteng Li, Aiwei Liu, Xuming Hu, Weiyu Guo, Hui Xiong • 2025

Related benchmarks

Task                  | Dataset       | Result                     | Rank
Instruction Following | InstructBench | Dolly (BLEU): 75.27        | 224
Text Generation       | DomainBench   | BLEU (Agriculture): 71.37  | 144
Instruction Following | DomainBench   | Agriculture Score: 21.85   | 80
