
Exploring Fine-Tuning for In-Context Retrieval and Efficient KV-Caching in Long-Context Language Models

About

With context windows of millions of tokens, Long-Context Language Models (LCLMs) can encode entire document collections, offering a strong alternative to conventional retrieval-augmented generation (RAG). However, it remains unclear whether fine-tuning strategies can improve long-context performance and translate to greater robustness under KV-cache compression techniques. In this work, we investigate which training strategies most effectively enhance LCLMs' ability to identify and use relevant information, as well as enhance their robustness under KV-cache compression. Our experiments show substantial in-domain improvements, with gains of up to +20 points over the base model. However, out-of-domain generalization remains task-dependent with large variance: the fine-tuned LCLM excels on finance questions (+9 points), while RAG shows stronger performance on multiple-choice questions (+6 points), relative to the baseline models. Finally, we show that our fine-tuning approaches bring moderate improvements in robustness under KV-cache compression, with gains varying across tasks.
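To make the KV-cache compression setting concrete, here is a minimal sketch of one common family of compression heuristics: evicting cached key/value entries that have received the least aggregate attention. This is an illustrative toy, not the paper's method; the function name, array shapes, and the top-k scoring rule are all assumptions for the example.

```python
import numpy as np

def compress_kv_cache(keys, values, scores, keep_ratio=0.5):
    """Keep only the cache entries with the highest aggregate attention.

    keys, values: (seq_len, head_dim) arrays, the cached K/V for one head.
    scores: (seq_len,) aggregate attention mass each cached token received.
    keep_ratio: fraction of entries retained after compression (assumed knob).
    """
    seq_len = keys.shape[0]
    k = max(1, int(seq_len * keep_ratio))
    # Indices of the k highest-scoring tokens, restored to original order
    # so positional structure of the retained cache is preserved.
    keep = np.sort(np.argsort(scores)[-k:])
    return keys[keep], values[keep]

# Toy example: 8 cached tokens with 4-dim heads and random attention mass.
rng = np.random.default_rng(0)
keys = rng.normal(size=(8, 4))
values = rng.normal(size=(8, 4))
scores = rng.random(8)
k_small, v_small = compress_kv_cache(keys, values, scores, keep_ratio=0.5)
print(k_small.shape)  # (4, 4)
```

The robustness question the paper studies is, in these terms, how much task accuracy degrades as `keep_ratio` shrinks, and whether fine-tuning reduces that degradation.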

Francesco Maria Molfese, Momchil Hardalov, Rexhina Blloshmi, Bill Byrne, Adrià de Gispert • 2026

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Long-context evaluation (Financial) | Loong Fin | Fin Judge Score | 58.8 | 13 |
| Long-context evaluation | LB v2 (ALL) | Accuracy (ALL) | 32.6 | 13 |
| Long-context language tasks (MC, QA, Sum) | ∞Bench | MC Accuracy | 72.5 | 13 |
| Question Answering | HELMET RAG subset | HotpotQA Accuracy | 81.1 | 8 |
