
Evaluating Language Model Context Windows: A "Working Memory" Test and Inference-time Correction

About

Large language models are increasingly used in real-world applications, often tasked with reasoning over large volumes of documents. An exciting development in this space is models with extended context capabilities, some accommodating over 2 million tokens. How well such long-context models perform in production systems remains uncertain, motivating the need to benchmark them on real-world use cases. We address this challenge by proposing SWiM, an evaluation framework that overcomes the limitations of standard tests. Testing the framework on eight long-context models, we find that even strong models such as GPT-4 and Claude 3 Opus degrade in performance when information is located in the middle of the context window (the "lost-in-the-middle" effect). In addition to our benchmark, we propose medoid voting, a simple but effective training-free approach that helps alleviate this effect: we generate a response several times, each time randomly permuting the documents in the context, and select the medoid answer. We evaluate medoid voting on single-document QA tasks, achieving up to a 24% lift in accuracy. Our code is available at https://github.com/snorkel-ai/long-context-eval.
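The medoid-voting procedure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `llm` callable and the token-overlap similarity metric are stand-ins for whatever model interface and answer-similarity measure are actually used.

```python
import random


def similarity(a: str, b: str) -> float:
    """Token-overlap (Jaccard) similarity between two answers.
    (A simple stand-in; any pairwise answer-similarity metric works.)"""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)


def medoid_voting(llm, question, documents, n_samples=5, seed=0):
    """Query the model n_samples times, randomly permuting the document
    order in the context each time, then return the medoid answer: the
    one with the highest total similarity to all sampled answers."""
    rng = random.Random(seed)
    answers = []
    for _ in range(n_samples):
        docs = documents[:]
        rng.shuffle(docs)  # random permutation of the context documents
        answers.append(llm(question, docs))
    # The medoid maximizes summed similarity to the other answers,
    # so answers that happen to be hurt by an unlucky ordering
    # (e.g. the key document landing mid-context) are outvoted.
    totals = [sum(similarity(a, b) for b in answers) for a in answers]
    return answers[max(range(len(answers)), key=totals.__getitem__)]
```

With a consistent model, most permutations yield the same answer, and that consensus answer is selected even if a few orderings trigger the lost-in-the-middle effect.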

Amanda Dsouza, Christopher Glaze, Changho Shin, Frederic Sala • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Long-context Question Answering | LongBench HotpotQA (test) | – | 8.31 | 59 |
| Key-Value Retrieval | InfiniteBench 4k | Accuracy | 93 | 12 |
| Key-Value Retrieval | InfiniteBench 8k | Accuracy | 76 | 12 |
| Variable Tracking | RULER 4k | F1 Score | 0.00e+0 | 12 |
| Variable Tracking | RULER 8k | F1 Score | 0.00e+0 | 12 |
| Key-Value Retrieval | InfiniteBench 16k | Accuracy (%) | 54 | 10 |
| Variable Tracking | RULER 16k | F1 Score | 0.00e+0 | 10 |
