Joint Passage Ranking for Diverse Multi-Answer Retrieval
About
We study multi-answer retrieval, an under-explored problem that requires retrieving passages to cover multiple distinct answers for a given question. This task requires joint modeling of retrieved passages, as models should not repeatedly retrieve passages containing the same answer at the cost of missing a different valid answer. In this paper, we introduce JPR, the first joint passage retrieval model for multi-answer retrieval. JPR makes use of an autoregressive reranker that selects a sequence of passages, each conditioned on previously selected passages. JPR is trained to select passages that cover new answers at each timestep and uses a tree-decoding algorithm to enable flexibility in the degree of diversity. Compared to prior approaches, JPR achieves significantly better answer coverage on three multi-answer datasets. When combined with downstream question answering, the improved retrieval enables larger answer generation models since they need to consider fewer passages, establishing a new state-of-the-art.
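The core idea above, selecting each passage conditioned on the passages already chosen so that redundant answers are penalized, can be sketched in a few lines. This is a toy illustration, not the paper's T5-based reranker: `select_passages` and `toy_score` are hypothetical names, and the scorer simply rewards passages that contain answers not yet covered.

```python
# Toy sketch of autoregressive passage selection (not the paper's model).
# score(question, passage, selected) stands in for a learned reranker; here
# it rewards passages whose answer strings are not already covered.

def select_passages(question, passages, k, score):
    """Greedily pick k passages, each scored conditioned on prior picks."""
    selected = []
    remaining = list(passages)
    for _ in range(k):
        if not remaining:
            break
        best = max(remaining, key=lambda p: score(question, p, selected))
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical demo: passages tagged with the answers they contain.
passages = [
    {"id": "p1", "answers": {"1969"}},
    {"id": "p2", "answers": {"1969"}},            # repeats p1's answer
    {"id": "p3", "answers": {"Neil Armstrong"}},  # a distinct valid answer
]

def toy_score(question, passage, selected):
    covered = set().union(*(p["answers"] for p in selected)) if selected else set()
    return len(passage["answers"] - covered)  # count of novel answers

picked = select_passages("who reached the moon and when", passages, 2, toy_score)
print([p["id"] for p in picked])  # → ['p1', 'p3']: the redundant p2 is skipped
```

A fixed greedy pick is only one decoding path; the paper's tree-decoding algorithm instead explores several such conditional selection sequences, which is what lets it trade off between diversity and relevance.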
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-answer Question Answering | AMBIGQA (dev) | F1 (all questions) | 48.5 | 3 |
| Multi-answer Question Answering | AMBIGQA (test) | F1 (all questions) | 43.5 | 3 |
| Question Answering | Natural Questions (NQ) single-answer (test) | Exact Match | 54.5 | 3 |
| Question Answering | Natural Questions (NQ) single-answer (dev) | Exact Match | 50.4 | 3 |
| Multi-answer Question Answering | WEBQSP (dev) | F1 (all questions) | 53.6 | 2 |
| Multi-answer Question Answering | WEBQSP (test) | F1 (all questions) | 53.1 | 2 |