
Retrieval meets Long Context Large Language Models

About

Extending the context window of large language models (LLMs) has become popular recently, while augmenting LLMs with retrieval has been a solution for years. The natural questions are: i) retrieval-augmentation versus a long context window — which is better for downstream tasks? ii) Can both methods be combined to get the best of both worlds? In this work, we answer these questions by studying both solutions with two state-of-the-art pretrained LLMs: a proprietary 43B GPT and Llama2-70B. Perhaps surprisingly, we find that an LLM with a 4K context window using simple retrieval-augmentation at generation can achieve performance comparable to an LLM finetuned to a 16K context window via positional interpolation on long context tasks, while requiring far less computation. More importantly, we demonstrate that retrieval can significantly improve the performance of LLMs regardless of their extended context window sizes. Our best model, retrieval-augmented Llama2-70B with a 32K context window, outperforms GPT-3.5-turbo-16k and Davinci003 in average score on nine long context tasks, including question answering, query-based summarization, and in-context few-shot learning. It also outperforms its non-retrieval Llama2-70B-32k baseline by a margin, while being much faster at generation. Our study provides general insights for practitioners on the choice between retrieval-augmentation and long-context extension of LLMs.
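The retrieval-augmentation-at-generation idea described above can be sketched minimally: split a long document into chunks, rank chunks by similarity to the query, and pack only the top-ranked chunks into a bounded context before generation. The sketch below is an illustration under stated assumptions — the chunk size, budget, and the simple bag-of-words cosine retriever are placeholders, not the paper's actual setup (the paper evaluates stronger dense retrievers).

```python
# Minimal sketch of retrieval-augmentation at generation: retrieve top-k
# document chunks for a query and pack them into a small context window.
# Chunk size, k, and the bag-of-words retriever are illustrative assumptions.
import math
import re
from collections import Counter

def chunk(text, size=64):
    """Split text into fixed-size word chunks (chunk size is an assumption)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def bow(text):
    """Bag-of-words term-frequency vector."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    num = sum(a[t] * b[t] for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(query, chunks, k=2):
    """Return the top-k chunks ranked by similarity to the query."""
    q = bow(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, bow(c)), reverse=True)
    return ranked[:k]

def build_prompt(query, document, budget_words=200):
    """Pack retrieved chunks into a bounded context, then append the question."""
    ctx, used = [], 0
    for c in retrieve(query, chunk(document)):
        n = len(c.split())
        if used + n > budget_words:
            break  # stop once the (assumed) context budget is exhausted
        ctx.append(c)
        used += n
    return "\n\n".join(ctx) + f"\n\nQuestion: {query}\nAnswer:"
```

The resulting prompt would then be passed to a fixed-context LLM; the point the paper makes is that this cheap retrieval step lets a 4K-window model compete with far more expensive long-context finetuning.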

Peng Xu, Wei Ping, Xianchao Wu, Lawrence McAfee, Chen Zhu, Zihan Liu, Sandeep Subramanian, Evelina Bakhturina, Mohammad Shoeybi, Bryan Catanzaro • 2023

Related benchmarks

Task                 | Dataset                               | Result                | Rank
---------------------|---------------------------------------|-----------------------|-----
Question Answering   | TriviaQA                              | Accuracy 79.9         | 210
Question Answering   | PopQA                                 | Accuracy 41           | 186
Question Answering   | NQ                                    | Accuracy 47.5         | 108
Question Answering   | NarrativeQA                           | F1 Score 19.12        | 87
Question Answering   | HotpotQA                              | --                    | 79
Question Answering   | MuSiQue                               | F1 Score 40.38        | 60
Question Answering   | BioASQ                                | Accuracy 59.1         | 57
Accurate Retrieval   | Accurate Retrieval (AR) suite         | Convo Score 513.4     | 36
Test-Time Learning   | Test-Time Learning (TTL) suite        | Bank77 Accuracy 81    | 36
Question Answering   | Overall: NQ, TriviaQA, BioASQ, PopQA  | Accuracy 0.581        | 32

(Showing 10 of 18 rows)
