Retrieval-Feedback-Driven Distillation and Preference Alignment for Efficient LLM-based Query Expansion

About

Large language models have recently enabled a generative paradigm for query expansion, but their high inference cost makes direct deployment difficult in practical retrieval systems. To address this issue, a retrieval-feedback-driven distillation and preference-alignment framework is proposed to transfer retrieval-friendly expansion behavior from a strong teacher model to a compact student model. Rather than relying on few-shot exemplars at inference time, the framework first leverages two complementary types of teacher-generated expansions, produced under zero-shot and few-shot prompting, as supervision signals for distillation and as candidate pools for preference construction. A retrieval-metric-driven strategy then automatically forms chosen/rejected expansion pairs according to nDCG@10 differences, and Direct Preference Optimization (DPO) is applied to explicitly align the student's generation preferences with retrieval objectives. Experiments on TREC DL19/20/21 and MIRACL-zh show that the proposed approach preserves strong retrieval effectiveness while substantially reducing inference cost. In particular, the distilled Qwen3-4B student reaches about 97% of the nDCG@10 of the DeepSeek-685B teacher on DL19 and remains effective on the Chinese MIRACL-zh benchmark, demonstrating practicality across both English and Chinese retrieval settings.

Minghan Li, Guodong Zhou • 2026
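
The pairing step in the abstract (score each teacher expansion by retrieval feedback, then form chosen/rejected pairs from nDCG@10 differences) can be sketched concretely. The following minimal Python illustration is not the authors' code: the helper names (ndcg_at_10, ExpansionCandidate, build_preference_pairs), the best-vs-rest pairing scheme, the 0.02 score margin, and the linear-gain nDCG formulation are all assumptions made for illustration.

```python
# Hypothetical sketch of retrieval-metric-driven preference-pair construction.
# Not the paper's implementation; all names and thresholds are assumptions.

import math
from dataclasses import dataclass


def ndcg_at_10(ranked_doc_ids, qrels):
    """nDCG@10 with linear gains (gain = graded relevance); some evaluation
    tools use 2^rel - 1 instead, so match your benchmark's convention."""
    def dcg(gains):
        return sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))
    gains = [qrels.get(doc_id, 0) for doc_id in ranked_doc_ids[:10]]
    ideal = sorted(qrels.values(), reverse=True)[:10]
    return dcg(gains) / dcg(ideal) if any(ideal) else 0.0


@dataclass
class ExpansionCandidate:
    query: str       # original user query
    expansion: str   # teacher-generated expansion (zero-shot or few-shot prompt)
    score: float     # nDCG@10 of retrieval with the expanded query


def build_preference_pairs(candidates, margin=0.02):
    """Per query, pair the best-scoring expansion (chosen) against each
    lower-scoring one (rejected); drop pairs whose nDCG@10 gap <= margin,
    since near-ties carry little preference signal."""
    by_query = {}
    for cand in candidates:
        by_query.setdefault(cand.query, []).append(cand)

    pairs = []
    for query, cands in by_query.items():
        cands.sort(key=lambda c: c.score, reverse=True)
        best = cands[0]
        for worse in cands[1:]:
            if best.score - worse.score > margin:
                pairs.append({"prompt": query,
                              "chosen": best.expansion,
                              "rejected": worse.expansion})
    return pairs
```

The resulting prompt/chosen/rejected records match the pairwise preference format that libraries such as Hugging Face TRL's DPOTrainer consume, so under these assumptions the DPO alignment stage could run directly on this output.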

Related benchmarks

Task                   Dataset         Metric    Result  Rank
Information Retrieval  TREC DL19       nDCG@10   65.36   61
Information Retrieval  TREC DL20       nDCG@10   59.21   50
Information Retrieval  TREC DL21       nDCG@10   56.33   7
Information Retrieval  MIRACL-zh       nDCG@10   33.66   7
Dense Retrieval        TREC DL19 v1.5  nDCG@10   72.26   6
Dense Retrieval        TREC DL20 v1.5  nDCG@10   70.57   6