
Can Generative Pre-trained Language Models Serve as Knowledge Bases for Closed-book QA?

About

Recent work has investigated the interesting question of whether pre-trained language models (PLMs) can serve as knowledge bases for answering open questions. However, existing work is limited to small benchmarks with high test-train overlap. We construct a new closed-book QA dataset from SQuAD and investigate the performance of BART. Experiments show that it is challenging for BART both to memorize training facts with high precision and to answer closed-book questions even when the relevant knowledge is retained. Some promising directions are identified, including decoupling the knowledge-memorization process from the QA fine-tuning process and forcing the model to recall relevant knowledge when answering questions.
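The test-train overlap and exact-match (EM) issues mentioned above can be illustrated with a short sketch. This is not the paper's evaluation code; the helper names and the toy data are made up for demonstration, and the normalization is deliberately simpler than the official SQuAD script.

```python
# Illustrative sketch (not from the paper): test-train answer overlap
# and a lenient exact-match (EM) metric. All data below is invented.

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace for a lenient string match."""
    return " ".join(text.lower().split())

def exact_match(prediction: str, gold: str) -> bool:
    """EM: prediction must equal the gold answer after normalization."""
    return normalize(prediction) == normalize(gold)

def answer_overlap(train_answers, test_answers) -> float:
    """Fraction of test answers that also appear verbatim in training data."""
    train_set = {normalize(a) for a in train_answers}
    hits = sum(1 for a in test_answers if normalize(a) in train_set)
    return hits / len(test_answers)

train = ["Paris", "Isaac Newton", "1969"]
test = ["paris", "Marie Curie"]
print(answer_overlap(train, test))       # 0.5: "paris" is also a training answer
print(exact_match("  Paris ", "paris"))  # True
```

A high `answer_overlap` value means a model can score well by memorizing training answers rather than reasoning over retained knowledge, which is why the paper argues for a benchmark with controlled overlap.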

Cunxiang Wang, Pai Liu, Yue Zhang • 2021

Related benchmarks

Task                             Dataset                                 Result     Rank
Closed-book Question Answering   TriviaQA TQ (test)                      --         9
Closed-book Question Answering   NaturalQuestions (test)                 --         9
Question Answering               SQuAD 20 passages subset 1.1 (test)     RA 0.873   5
Question Answering               SQuAD 160 passages subset 1.1 (test)    RA 79.6    5
Question Answering               SQuAD 547 passages 1.1 (test)           RA 66.3    5
Closed-book Question Answering   SQuAD adaptation 2 (test)               EM 1.8     2
Closed-book Question Answering   WebQuestions WB (test)                  --         1
