
Training Dense Retrievers with Multiple Positive Passages

About

Modern knowledge-intensive systems, such as retrieval-augmented generation (RAG), rely on effective retrievers to establish the performance ceiling for downstream modules. However, retriever training has been bottlenecked by sparse, single-positive annotations, which lead to false-negative noise and suboptimal supervision. While the advent of large language models (LLMs) makes it feasible to collect comprehensive multi-positive relevance labels at scale, the optimal strategy for incorporating these dense signals into training remains poorly understood. In this paper, we present a systematic study of multi-positive optimization objectives for retriever training. We unify representative objectives, including Joint Likelihood (JointLH), Summed Marginal Likelihood (SumMargLH), and Log-Sum-Exp Pairwise (LSEPair) loss, under a shared contrastive learning framework. Our theoretical analysis characterizes their distinct gradient behaviors, revealing how each allocates probability mass across positive document sets. Empirically, we conduct extensive evaluations on Natural Questions, MS MARCO, and the BEIR benchmark across two realistic regimes: homogeneous LLM-annotated data and heterogeneous mixtures of human and LLM labels. Our results show that LSEPair consistently achieves superior robustness and performance across settings, while JointLH and SumMargLH exhibit high sensitivity to the quality of positives. Furthermore, we find that the simple strategy of random sampling (Rand1LH) serves as a reliable baseline. By aligning theoretical insights with empirical findings, we provide practical design principles for leveraging dense, LLM-augmented supervision to enhance retriever effectiveness.
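To make the three objectives concrete, here is a minimal stdlib-only sketch of one plausible formulation of each loss, given raw similarity scores for a query's positive and negative passages. The exact formulations used in the paper are not reproduced here; these follow the common contrastive-learning conventions the names suggest (JointLH as a product of per-positive likelihoods, SumMargLH as the marginal likelihood of the pooled positive set, LSEPair as a log-sum-exp over pairwise margin violations), and the function names are illustrative.

```python
import math

def _logsumexp(xs):
    """Numerically stable log(sum(exp(x))) over a list of scores."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def joint_lh(pos, neg):
    """Joint likelihood (assumed form): each positive must individually
    beat the negatives; the log of the product over positives becomes a
    sum of per-positive log-likelihoods."""
    return sum(-(p - _logsumexp([p] + neg)) for p in pos)

def sum_marg_lh(pos, neg):
    """Summed marginal likelihood (assumed form): probability mass of the
    whole positive set against all candidates, with positives pooled in
    the numerator."""
    return -(_logsumexp(pos) - _logsumexp(pos + neg))

def lse_pair(pos, neg):
    """Log-sum-exp pairwise loss (assumed form): a smooth maximum over
    all positive-negative margin violations."""
    diffs = [n - p for p in pos for n in neg]
    return math.log1p(sum(math.exp(d) for d in diffs))
```

When every positive clearly outscores every negative, all three losses approach zero; they differ in how gradient mass is spread across the positive set, which is the behavior the paper's analysis characterizes.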

Benben Wang, Minghao Tang, Hengran Zhang, Jiafeng Guo, Keping Bi • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Retrieval | MS MARCO (dev) | MRR@10 | 0.3057 | 84 |
| Information Retrieval | BEIR (test) | TREC-COVID Score | 31.1 | 76 |
| Retrieval | NQ (test) | Top-20 Accuracy | 77.01 | 11 |
| Retrieval | MS MARCO DL19 | Recall@1000 | 95.1 | 5 |
| Retrieval | MS MARCO DL20 | NDCG@10 | 0.632 | 5 |
| Retrieval | MS MARCO Hybrid Annotation | MRR@10 | 79.19 | 5 |
