LooComp: Leverage Leave-One-Out Strategy to Encoder-only Transformer for Efficient Query-aware Context Compression

About

Efficient context compression is crucial for improving the accuracy and scalability of question answering. For efficient Retrieval-Augmented Generation, context must be delivered quickly, compactly, and precisely, ensuring clue sufficiency while keeping the LLM reader's cost within budget. We propose a margin-based framework for query-driven context pruning, which identifies sentences critical for answering a query by measuring the change in clue richness when each sentence is omitted. The model is trained with a composite ranking loss that enforces large margins for critical sentences while keeping non-critical ones near neutral. Built on a lightweight encoder-only Transformer, our approach generally achieves strong exact-match and F1 scores with high-throughput inference and lower memory requirements than major baselines. Beyond efficiency, our method yields effective compression ratios without degrading answering performance, demonstrating its potential as a lightweight and practical alternative for retrieval-augmented tasks.
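The composite loss described above can be sketched as follows. This is a minimal illustration assuming per-sentence leave-one-out score deltas are already computed; the function name `loo_margin_loss`, the hinge form, and the margin value are assumptions for illustration, not the authors' actual implementation.

```python
MARGIN = 1.0  # assumed margin enforced for critical sentences


def loo_margin_loss(deltas, critical):
    """Composite ranking loss over per-sentence leave-one-out deltas.

    deltas[i]  : drop in clue richness when sentence i is omitted
    critical[i]: True if sentence i is needed to answer the query

    Critical sentences incur a hinge penalty unless their delta
    exceeds MARGIN; non-critical sentences are pulled toward a
    near-zero (neutral) delta.
    """
    loss = 0.0
    for d, is_crit in zip(deltas, critical):
        if is_crit:
            loss += max(0.0, MARGIN - d)  # enforce a large margin
        else:
            loss += abs(d)                # keep near neutral
    return loss / len(deltas)
```

At inference, sentences whose deltas fall below a pruning threshold would be dropped, yielding the compressed context passed to the LLM reader.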

Thao Do, Dinh Phu Tran, An Vo, Seon Kwon Kim, Daeyoung Kim • 2026

Related benchmarks

Task                                       Dataset                                   Metric  Result  Rank
Question Answering                         2Wiki                                     F1      52.6    152
Question Answering                         HotpotQA                                  EM      33.6    109
Question Answering                         HQA                                       EM      0.441   55
Question Answering                         Average of 5 datasets                     --      --      46
Question Answering                         MuSiQue                                   EM      10.7    24
Question Answering                         NQ                                        EM      36.3    15
Context Compression for Question Answering Aggregated NQ, TQA, HQA, 2Wiki, Musique   EM      34      8
