
xRAG: Extreme Context Compression for Retrieval-augmented Generation with One Token

About

This paper introduces xRAG, an innovative context compression method tailored for retrieval-augmented generation. xRAG reinterprets document embeddings in dense retrieval, traditionally used solely for retrieval, as features from the retrieval modality. By employing a modality fusion methodology, xRAG seamlessly integrates these embeddings into the language model representation space, effectively eliminating the need for their textual counterparts and achieving an extreme compression rate. In xRAG, the only trainable component is the modality bridge, while both the retriever and the language model remain frozen. This design allows the reuse of offline-constructed document embeddings and preserves the plug-and-play nature of retrieval augmentation. Experimental results demonstrate that xRAG achieves an average improvement of over 10% across six knowledge-intensive tasks and adapts to various language model backbones, ranging from a dense 7B model to an 8x7B Mixture-of-Experts configuration. xRAG not only significantly outperforms previous context compression methods but also matches the performance of uncompressed models on several datasets, while reducing overall FLOPs by a factor of 3.53. Our work pioneers new directions in retrieval-augmented generation from the perspective of multimodality fusion, and we hope it lays the foundation for future efficient and scalable retrieval-augmented systems.
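The core mechanism the abstract describes is a small trainable bridge that projects a frozen retriever's document embedding into the language model's token-embedding space, so that one "soft token" replaces the full retrieved passage. Below is a minimal PyTorch sketch of that idea; the two-layer MLP shape, the 768/4096 dimensions, and the ModalityBridge name are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class ModalityBridge(nn.Module):
    """Projects a frozen retriever embedding into the LM's token-embedding
    space as a single soft token. Architecture here is an assumption: a
    two-layer MLP, which is a common choice for such projectors."""
    def __init__(self, retriever_dim: int, lm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(retriever_dim, lm_dim),
            nn.GELU(),
            nn.Linear(lm_dim, lm_dim),
        )

    def forward(self, doc_embedding: torch.Tensor) -> torch.Tensor:
        # (batch, retriever_dim) -> (batch, 1, lm_dim): one token per document
        return self.proj(doc_embedding).unsqueeze(1)


# Usage sketch: prepend the single projected token to the question's token
# embeddings. Only the bridge is trainable; retriever and LM stay frozen,
# so offline-built document embeddings can be reused as-is.
bridge = ModalityBridge(retriever_dim=768, lm_dim=4096)
doc_emb = torch.randn(2, 768)             # offline-constructed document embeddings
question_embs = torch.randn(2, 16, 4096)  # LM embeddings of the question tokens
lm_inputs = torch.cat([bridge(doc_emb), question_embs], dim=1)
print(lm_inputs.shape)  # torch.Size([2, 17, 4096]): 1 doc token + 16 question tokens
```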

Xin Cheng, Xun Wang, Xingxing Zhang, Tao Ge, Si-Qing Chen, Furu Wei, Huishuai Zhang, Dongyan Zhao (2024)

Related benchmarks

Task                          Dataset                   Metric         Result  Rank
Multi-hop Question Answering  2WikiMultiHopQA (test)    EM             48.8    195
Question Answering            SQuAD                     F1             18.19   134
Question Answering            HotpotQA                  F1             27.51   128
Inference Efficiency          Natural Questions (NQ)    --             --      90
Question Answering            Natural Questions (test)  EM             44.13   72
Long-context Reasoning        LongBench v2              Average Score  27.43   48
Long-context Reasoning        Locomo                    --             --      45
Question Answering            HotpotQA (test)           Ans EM         30.4    37
Question Answering            PopQA longtail            EM             33.95   23
Multi-hop Question Answering  HotpotQA                  EM             8.45    18

Showing 10 of 24 rows.
