Training-Free Test-Time Contrastive Learning for Large Language Models
About
Large language models (LLMs) demonstrate strong reasoning capabilities, but their performance often degrades under distribution shift. Existing test-time adaptation (TTA) methods rely on gradient-based updates that require white-box access and incur substantial overhead, while training-free alternatives are either static or depend on external guidance. In this paper, we propose Training-Free Test-Time Contrastive Learning (TF-TTCL), a training-free adaptation framework that enables a frozen LLM to improve online by distilling supervision from its own inference experiences. Specifically, TF-TTCL implements a dynamic "Explore-Reflect-Steer" loop through three core modules: 1) Semantic Query Augmentation first diversifies problem views via multi-agent role-playing to generate diverse reasoning trajectories; 2) Contrastive Experience Distillation then captures the semantic gap between superior and inferior trajectories, distilling it into explicit textual rules; and 3) Contextual Rule Retrieval finally activates these stored rules during inference to dynamically steer the frozen LLM toward robust reasoning patterns while avoiding observed errors. Extensive experiments on closed-ended reasoning tasks and open-ended evaluation tasks demonstrate that TF-TTCL consistently outperforms strong zero-shot baselines and representative TTA methods under online evaluation. Code is available at https://github.com/KevinSCUTer/TF-TTCL.
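The three-module loop above can be sketched in a few lines of Python. This is an illustrative stand-in, not the repository's implementation: the role list, the `RuleMemory` class, the keyword-overlap retrieval, and the `generate`/`score` callables are all hypothetical placeholders for the LLM prompting and trajectory scoring the paper describes.

```python
# Hypothetical sketch of the "Explore-Reflect-Steer" loop. In TF-TTCL the
# trajectories come from a frozen LLM; here `generate` and `score` are
# caller-supplied stubs so the control flow is runnable on its own.
from dataclasses import dataclass, field

ROLES = ["mathematician", "engineer", "skeptic"]  # illustrative role views


@dataclass
class RuleMemory:
    """Stores distilled textual rules keyed by the query's words."""
    rules: list = field(default_factory=list)  # (keyword set, rule text)

    def add(self, query: str, rule_text: str) -> None:
        self.rules.append((set(query.lower().split()), rule_text))

    def retrieve(self, query: str, k: int = 2) -> list:
        # Toy retrieval: rank stored rules by keyword overlap with the query.
        words = set(query.lower().split())
        ranked = sorted(self.rules, key=lambda r: -len(r[0] & words))
        return [text for _, text in ranked[:k]]


def explore(query, generate):
    # 1) Semantic Query Augmentation: one trajectory per role-played view.
    return [generate(f"As a {role}, solve: {query}") for role in ROLES]


def reflect(query, trajectories, score, memory):
    # 2) Contrastive Experience Distillation: contrast the best and worst
    #    trajectories and store the contrast as an explicit textual rule.
    ranked = sorted(trajectories, key=score, reverse=True)
    best, worst = ranked[0], ranked[-1]
    rule = f"Prefer reasoning like: {best!r}; avoid the pattern in: {worst!r}."
    memory.add(query, rule)
    return rule


def steer(query, memory, generate):
    # 3) Contextual Rule Retrieval: prepend retrieved rules to the prompt
    #    so the frozen model is steered without any weight update.
    rules = memory.retrieve(query)
    prompt = "\n".join(rules + [f"Solve: {query}"])
    return generate(prompt)
```

A usage round-trip with trivial stubs (`generate` echoes its prompt, `score` is string length) shows one pass of the loop: explore produces one trajectory per role, reflect writes a single rule into memory, and steer emits a prompt carrying that rule alongside the query.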
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reasoning | GSM8K | -- | -- | 106 |
| Reasoning | MATH 500 | Accuracy (%) | 54 | 90 |
| Mathematical Reasoning | Minerva | Accuracy | 24.63 | 62 |
| Reasoning | AIME 24 | Accuracy (%) | 83.33 | 49 |
| Text Generation | DomainBench Finance | BERTScore | 0.7235 | 15 |
| Open-ended generation | Finance | ROUGE-Lsum | 29.19 | 8 |
| Closed-ended reasoning | AIME24 | Accuracy | 0.1333 | 7 |
| Open-ended evaluation | DomainBench (test) | Geography Score | 27.98 | 7 |
| Text Generation | DomainBench Geography | BERTScore | 0.7082 | 7 |
| Text Generation | DomainBench Medicine | BERTScore | 0.701 | 7 |