Query2doc: Query Expansion with Large Language Models
About
This paper introduces a simple yet effective query expansion approach, denoted query2doc, that improves both sparse and dense retrieval systems. The proposed method first generates pseudo-documents by few-shot prompting large language models (LLMs), then expands the query with the generated pseudo-documents. LLMs are trained on web-scale text corpora and are adept at memorizing knowledge, so the pseudo-documents they produce often contain highly relevant information that helps disambiguate the query and guide the retrievers. Experimental results demonstrate that query2doc boosts BM25 performance by 3% to 15% on ad-hoc IR datasets such as MS-MARCO and TREC DL, without any model fine-tuning. Furthermore, the method also benefits state-of-the-art dense retrievers on both in-domain and out-of-domain benchmarks.
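The expansion step described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the LLM prompting that produces the pseudo-document is abstracted away as an input string, the function names are invented for this sketch, and the repeat count for the sparse variant (which balances the short query against the much longer pseudo-document) is an assumed default.

```python
def expand_query_sparse(query: str, pseudo_doc: str, n_repeats: int = 5) -> str:
    """Query2doc-style expansion for sparse retrievers such as BM25.

    The original query is repeated n_repeats times before concatenating the
    LLM-generated pseudo-document, so the query terms keep enough weight in
    the term-frequency-based scoring. (n_repeats=5 is an assumed default.)
    """
    return " ".join([query] * n_repeats + [pseudo_doc])


def expand_query_dense(query: str, pseudo_doc: str, sep: str = "[SEP]") -> str:
    """Query2doc-style expansion for dense retrievers.

    The query and pseudo-document are joined with a separator token and the
    result is fed to the query encoder in place of the raw query.
    """
    return f"{query} {sep} {pseudo_doc}"


if __name__ == "__main__":
    q = "when was the telephone invented"
    d = "The telephone was patented by Alexander Graham Bell in 1876."
    print(expand_query_sparse(q, d, n_repeats=2))
    print(expand_query_dense(q, d))
```

The expanded string is then passed to the unchanged retriever, which is why no model fine-tuning is required.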
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multi-hop Question Answering | 2WikiMultihopQA | EM 38 | 278 |
| Multi-hop Question Answering | HotpotQA | F1 67.65 | 221 |
| Multi-hop Question Answering | MuSiQue | EM 22 | 106 |
| Information Retrieval | BEIR v1.0.0 (test) | -- | 55 |
| Tool Calling | API-Bank L-1 | -- | 46 |
| Medical Question Answering | Medical QA Evaluation Suite (MedQA, MedMCQA, MMLU-Med, PubMedQA, BioASQ, SEER, DDXPlus, MIMIC-IV) | MedQA 62.92 | 27 |
| Question Answering | NaturalQA | EM 36.87 | 26 |
| Retrieval | Bridge (test) | Hit@10 71 | 25 |
| Tool Calling | API-Bank L-2 | -- | 25 |
| Question Answering | WebQA | EM 26.03 | 23 |