Hyperlink-induced Pre-training for Passage Retrieval in Open-domain Question Answering
About
To alleviate the data scarcity problem in training question answering systems, recent works propose additional intermediate pre-training for dense passage retrieval (DPR). However, a large discrepancy still remains between the provided upstream signals and the downstream question-passage relevance, which limits the improvement. To bridge this gap, we propose HyperLink-induced Pre-training (HLP), a method to pre-train the dense retriever with the text relevance induced by hyperlink-based topology within Web documents. We demonstrate that the hyperlink-based structures of dual-link and co-mention can provide effective relevance signals for large-scale pre-training that better facilitate downstream passage retrieval. We investigate the effectiveness of our approach across a wide range of open-domain QA datasets under zero-shot, few-shot, multi-hop, and out-of-domain scenarios. The experiments show that HLP outperforms BM25 by up to 7 points, as well as other pre-training methods by more than 10 points, in terms of top-20 retrieval accuracy under the zero-shot scenario. Furthermore, HLP significantly outperforms other pre-training methods under the other scenarios.
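The two hyperlink structures can be illustrated with a toy link graph. Below is a minimal sketch (not the paper's actual mining pipeline, which operates over Wikipedia at scale): it interprets *dual-link* as a pair of documents that link to each other, and *co-mention* as a pair of documents both linked from a common third document. The graph contents here are invented for illustration.

```python
from itertools import combinations

# Toy hyperlink graph: document -> set of documents it links to.
# These documents and links are hypothetical.
links = {
    "A": {"B", "C"},
    "B": {"A", "C"},
    "C": {"D"},
    "D": set(),
}

def dual_link_pairs(links):
    """Pairs (a, b) where a links to b AND b links back to a."""
    pairs = set()
    for a, outs in links.items():
        for b in outs:
            # a < b deduplicates the symmetric pair (b, a).
            if a < b and a in links.get(b, set()):
                pairs.add((a, b))
    return pairs

def co_mention_pairs(links):
    """Pairs (a, b) that are both linked from some third document."""
    pairs = set()
    for src, outs in links.items():
        for a, b in combinations(sorted(outs), 2):
            pairs.add((a, b))
    return pairs

print(sorted(dual_link_pairs(links)))   # [('A', 'B')]
print(sorted(co_mention_pairs(links)))  # [('A', 'C'), ('B', 'C')]
```

In HLP, pairs mined this way supply pseudo question-passage training examples, standing in for the annotated relevance labels that downstream QA retrieval would otherwise require.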
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Passage retrieval | TriviaQA (test) | Top-100 Accuracy | 86.9 | 67 |
| Retrieval | Natural Questions (test) | Top-5 Recall | 70.9 | 62 |
| Passage retrieval | WebQuestions (WQ) (test) | Top-20 Accuracy | 76.5 | 37 |
| Retrieval | MS MARCO | -- | -- | 20 |
| Multi-hop Passage Retrieval | HotpotQA (dev) | Top-5 | 94.4 | 10 |
| Retrieval | BioASQ (test) | Top-20 | 46 | 9 |