TiTok: Transfer Token-level Knowledge via Contrastive Excess to Transplant LoRA
About
Large Language Models (LLMs) are widely applied in real-world scenarios, yet fine-tuning them comes with significant computational and storage costs. Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA mitigate these costs; however, the adapted parameters are tied to the base model and cannot be transferred across different backbones. One way to address this issue is knowledge distillation, but its effectiveness inherently depends on the training data. Recent work such as TransLoRA avoids this by generating synthetic data; nevertheless, this adds complexity because it requires training an additional discriminator model. In this paper, we propose TiTok, a new framework that enables effective LoRA Transplantation through Token-level knowledge transfer. Specifically, TiTok captures task-relevant information through a token-wise contrastive excess between a source model with and without LoRA. This excess highlights informative tokens and enables selective filtering of synthetic data, all without additional models or overhead. Through experiments on three benchmarks across multiple transfer settings, we demonstrate that TiTok is consistently effective, achieving average performance gains of +4% to +10% over baselines.
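The core idea can be sketched in a few lines. The snippet below is an illustrative simplification, not the paper's implementation: it assumes the contrastive excess is the difference between per-token log-probabilities assigned by the LoRA-adapted source model and the base model, and that tokens whose excess exceeds a threshold are kept for training the target model. The function names and the threshold value are illustrative choices, not from the paper.

```python
import numpy as np

def contrastive_excess(logp_lora, logp_base):
    """Per-token excess: how much more likely the LoRA-adapted model finds
    each token compared to the base model. Large positive values mark tokens
    that carry task-specific knowledge injected by the adapter."""
    return np.asarray(logp_lora) - np.asarray(logp_base)

def filter_tokens(tokens, logp_lora, logp_base, threshold=0.5):
    """Keep only tokens whose contrastive excess exceeds `threshold`;
    these are the informative tokens used to filter synthetic data."""
    excess = contrastive_excess(logp_lora, logp_base)
    return [tok for tok, e in zip(tokens, excess) if e > threshold]

# Toy example: the adapted model is far more confident on "Paris",
# so only that token survives the filter.
tokens    = ["The", "capital", "is", "Paris"]
logp_lora = [-2.0, -1.5, -0.9, -0.3]
logp_base = [-2.1, -1.6, -1.0, -2.8]
print(filter_tokens(tokens, logp_lora, logp_base))
```

Because the excess comes from two forward passes of the same source model (with the adapter enabled and disabled), no discriminator or auxiliary model is needed, which is the stated advantage over TransLoRA.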
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reasoning | BBH | Accuracy | 62.1 | 672 |
| Language Understanding | MMLU | MMLU Score | 56.1 | 98 |
| Reasoning | MMLU | Accuracy | 56.1 | 35 |
| Headline Generation | News Headline | ROUGE-1 | 16.1 | 32 |
| Scholarly Title Generation | Scholarly Title | ROUGE-1 | 47.3 | 32 |
| Scholarly Title Generation | LaMP-5 1.0 (test) | ROUGE-1 | 0.481 | 17 |
| News Headline Generation | LaMP News Headline (test) | ROUGE-1 | 15.1 | 9 |