
You Only Need One Model for Open-domain Question Answering

About

Recent approaches to Open-domain Question Answering refer to an external knowledge base using a retriever model, optionally rerank passages with a separate reranker model and generate an answer using another reader model. Despite performing related tasks, the models have separate parameters and are weakly-coupled during training. We propose casting the retriever and the reranker as internal passage-wise attention mechanisms applied sequentially within the transformer architecture and feeding computed representations to the reader, with the hidden representations progressively refined at each stage. This allows us to use a single question answering model trained end-to-end, which is a more efficient use of model capacity and also leads to better gradient flow. We present a pre-training method to effectively train this architecture and evaluate our model on the Natural Questions and TriviaQA open datasets. For a fixed parameter budget, our model outperforms the previous state-of-the-art model by 1.0 and 0.7 exact match scores.
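The idea of applying retrieval and reranking as sequential passage-wise attention stages inside one model can be illustrated with a toy sketch. Everything below is an assumption for illustration: the pooling, scoring, and weighting choices (`passage_wise_attention`, mean-pooled dot-product scores, softmax weighting of survivors) are simplified stand-ins, not the paper's actual layers.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def passage_wise_attention(question_vec, passage_reps, top_k):
    """Hypothetical sketch of one passage-wise attention stage:
    score passages against the question, keep the top-k, and pass
    score-weighted (refined) representations to the next stage."""
    # Passage score = dot product between question vector and a
    # mean-pooled passage representation (an illustrative choice).
    pooled = passage_reps.mean(axis=1)           # (n_passages, hidden)
    scores = pooled @ question_vec               # (n_passages,)
    keep = np.argsort(scores)[::-1][:top_k]
    weights = softmax(scores[keep])
    # Weight each surviving passage's token representations by its
    # attention weight, progressively refining the hidden states.
    refined = passage_reps[keep] * weights[:, None, None]
    return keep, refined

# Toy example: 8 candidate passages, 16 tokens each, hidden size 32.
rng = np.random.default_rng(0)
q = rng.normal(size=32)
passages = rng.normal(size=(8, 16, 32))

# A "retriever" stage narrows 8 passages to 4, then a "reranker"
# stage narrows those to 2; a reader would consume the result.
kept1, h1 = passage_wise_attention(q, passages, top_k=4)
kept2, h2 = passage_wise_attention(q, h1, top_k=2)
print(h2.shape)  # (2, 16, 32)
```

Because both stages operate on hidden representations inside the same forward pass, gradients from the reader's answer loss can flow back through the reranking and retrieval attention, which is the end-to-end coupling the abstract describes.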

Haejun Lee, Akhil Kedia, Jongwon Lee, Ashwin Paranjape, Christopher D. Manning, Kyoung-Gu Woo • 2021

Related benchmarks

Task                    Dataset                    Result              Rank
Question Answering      NQ (test)                  EM Accuracy 53.2    66
Information Retrieval   Natural Questions (test)   Recall@20 85.2      25
