
Beyond I.I.D.: Three Levels of Generalization for Question Answering on Knowledge Bases

About

Existing studies on question answering on knowledge bases (KBQA) mainly operate under the standard i.i.d. assumption, i.e., the training distribution over questions is the same as the test distribution. However, i.i.d. may be neither reasonably achievable nor desirable on large-scale KBs because 1) the true user distribution is hard to capture and 2) randomly sampling training examples from the enormous space would be highly data-inefficient. Instead, we suggest that KBQA models should have three levels of built-in generalization: i.i.d., compositional, and zero-shot. To facilitate the development of KBQA models with stronger generalization, we construct and release GrailQA, a new large-scale, high-quality dataset with 64,331 questions, and provide evaluation settings for all three levels of generalization. In addition, we propose a novel BERT-based KBQA model. The combination of our dataset and model enables us to thoroughly examine and demonstrate, for the first time, the key role of pre-trained contextual embeddings like BERT in the generalization of KBQA.
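The three generalization levels can be illustrated with a minimal sketch. Following the paper's definitions, a test question is zero-shot if it involves KB schema items never seen in training, compositional if all its schema items were seen but their composition is novel, and i.i.d. otherwise. The function name and argument representation below are illustrative assumptions, not the authors' implementation:

```python
def generalization_level(test_schema_items, test_template,
                         train_schema_items, train_templates):
    """Classify a test question's generalization level relative to training data.

    test_schema_items: KB schema items (classes/relations) used by the question.
    test_template: the question's logical-form composition (here, a tuple).
    train_schema_items: set of schema items seen during training.
    train_templates: set of compositions seen during training.
    """
    if not set(test_schema_items) <= train_schema_items:
        return "zero-shot"        # uses schema items unseen in training
    if test_template not in train_templates:
        return "compositional"    # seen items, but a novel composition
    return "i.i.d."               # items and composition both seen in training
```

For example, with training data covering only music-domain schema items, a film-domain question would be classified as zero-shot, while a question combining two seen music relations in a new way would be compositional.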

Yu Gu, Sue Kase, Michelle Vanni, Brian Sadler, Percy Liang, Xifeng Yan, Yu Su • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Knowledge Base Question Answering | WebQSP Freebase (test) | F1 Score: 70 | 46 |
| Knowledge Base Question Answering | WebQSP → GrailQA-Tech (test) | F1 Score: 35.9 | 36 |
| Knowledge Base Question Answering | GrailQA v1.0 (test) | Overall EM: 50.6 | 33 |
| Knowledge Base Question Answering | GrailQA (test) | F1: 44.1 | 27 |
| Knowledge Base Question Answering | WebQSP → GraphQA-Pop (test) | F1: 23.4 | 20 |
| Knowledge Base Question Answering | GraphQ (test) | F1: 25 | 19 |
| Knowledge Base Question Answering | GraphQ | F1 Score: 27 | 9 |
