
Context Embeddings for Efficient Answer Generation in RAG

About

Retrieval-Augmented Generation (RAG) overcomes the limited knowledge of LLMs by extending the input with external information. As a consequence, the contextual inputs to the model become much longer, which slows down decoding and directly increases the time a user has to wait for an answer. We address this challenge with COCOM, an effective context compression method that reduces long contexts to only a handful of Context Embeddings, speeding up generation by a large margin. Our method supports different compression rates, trading off decoding time against answer quality. Compared to earlier methods, COCOM handles multiple contexts more effectively, significantly reducing decoding time for long inputs. Our method demonstrates a speed-up of up to 5.69× while achieving higher performance than existing efficient context compression methods.
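The core idea of the abstract can be illustrated with a toy sketch: a long retrieved context (hundreds of token embeddings) is mapped down to a handful of context embeddings before being handed to the decoder. Note that COCOM learns this compression with an LLM-based compressor; the chunked mean pooling below is only a hypothetical stand-in to show the shapes and the compression rate involved.

```python
import numpy as np

def compress_context(token_embs: np.ndarray, num_ctx_embs: int) -> np.ndarray:
    """Toy compressor: mean-pool equal chunks of token embeddings into a
    small set of context embeddings. COCOM learns this mapping; mean
    pooling here is only an illustrative stand-in."""
    chunks = np.array_split(token_embs, num_ctx_embs, axis=0)
    return np.stack([chunk.mean(axis=0) for chunk in chunks])

# A retrieved passage of 512 tokens with 768-dim embeddings ...
context = np.random.default_rng(0).normal(size=(512, 768))

# ... compressed to 4 context embeddings (a 128x compression rate),
# so the decoder only attends over 4 positions instead of 512.
compressed = compress_context(context, num_ctx_embs=4)
print(compressed.shape)  # (4, 768)
```

Lower `num_ctx_embs` means faster decoding but a lossier summary of the context, which is the decoding-time/answer-quality trade-off the abstract describes.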

David Rau, Shuai Wang, Hervé Déjean, Stéphane Clinchant • 2024

Related benchmarks

Task                              Dataset            Metric         Result   Rank
Long-context Reasoning            LongBench v2       Average Score  27.24    48
Long-context Reasoning            Locomo             --             --       25
Question Answering                Natural Questions  EM             31.86    18
Question Answering                TriviaQA           EM             13.75    18
Multi-hop Question Answering      HotpotQA           EM             25.9     18
Question Answering                PopQA              EM             21.72    17
Fact Verification                 FactKG             Accuracy       60.87    17
Long-context Question Answering   L-Eval QA          NQ             61.47    13
Long-context Reasoning            BAMBOO 16k         AltQA Score    30.5     13
Long-context Summarization        L-Eval Sum         QMS            9.15     13
Showing 10 of 13 rows
