Training With "Paraphrasing the Original Text" Teaches LLM to Better Retrieve in Long-context Tasks

About

As Large Language Models (LLMs) continue to evolve, more are being designed to handle long-context inputs. Despite this advancement, most of them still struggle to handle long-context tasks accurately, often exhibiting the "lost in the middle" issue. We identify insufficient retrieval capability as one of the main causes of this issue. To tackle this challenge, we propose a novel approach to designing training data for long-context tasks, aimed at augmenting LLMs' ability to extract key information from long contexts. Specifically, we incorporate an additional part named "paraphrasing the original text" when constructing the answers of training samples, and then fine-tune the model. In experiments on LongBench and the NaturalQuestions Multi-document-QA dataset with models from the Llama and Qwen series, our method improves average scores by up to 8.48% and 4.48%, respectively, demonstrating its effectiveness at improving model performance on long-context tasks.
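To make the data-construction idea concrete, here is a minimal sketch of how a training sample with such an answer prefix might be assembled. The function name, prompt wording, and field names are illustrative assumptions, not the authors' released code; the core idea, per the abstract, is that the target output first restates the relevant original text and only then gives the final answer.

```python
# Illustrative sketch of the "paraphrasing the original text" sample format.
# All names and the prompt/response templates below are assumptions for
# demonstration, not the paper's exact implementation.

def build_training_sample(context: str, question: str,
                          relevant_passage: str, answer: str) -> dict:
    """Build one fine-tuning sample whose target response first
    restates the supporting passage, then states the final answer."""
    prompt = (
        f"{context}\n\n"
        f"Question: {question}\n"
        "First paraphrase the part of the original text relevant to the "
        "question, then answer it."
    )
    response = (
        f"The relevant part of the original text is: {relevant_passage}\n"
        f"Answer: {answer}"
    )
    return {"prompt": prompt, "response": response}


# Toy usage: the long context would normally contain many distractor documents.
sample = build_training_sample(
    context="[distractor documents...] The Eiffel Tower was completed "
            "in 1889. [more distractor documents...]",
    question="When was the Eiffel Tower completed?",
    relevant_passage="The Eiffel Tower was completed in 1889.",
    answer="1889",
)
print(sample["response"])
```

Training on targets of this shape forces the model to locate and reproduce the supporting span before answering, which is what the paper credits for the improved retrieval behavior in long contexts.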

Yijiong Yu, Yongfeng Huang, Zhixiao Qi, Zhe Zhou • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Long-context Question Answering | LongBench (test) | HotpotQA | 13.28 | 59 |
| Variable Tracking | RULER 8k | F1 Score | 79.28 | 12 |
| Variable Tracking | RULER 4k | F1 Score | 81.6 | 12 |
| Key-Value Retrieval | InfiniteBench 4k | Accuracy (%) | 92 | 12 |
| Key-Value Retrieval | InfiniteBench 8k | Accuracy (%) | 73 | 12 |
| Variable Tracking | RULER 16k | F1 Score | 70.56 | 10 |
| Key-Value Retrieval | InfiniteBench 16k | Accuracy (%) | 50 | 10 |
