Few-shot Transfer Learning for Knowledge Base Question Answering: Fusing Supervised Models with In-Context Learning
About
Existing Knowledge Base Question Answering (KBQA) architectures are hungry for annotated data, which makes them costly and time-consuming to deploy. We introduce the problem of few-shot transfer learning for KBQA, where the target domain offers only a few labeled examples, while a large labeled training dataset is available in a source domain. We propose a novel KBQA architecture, FuSIC-KBQA, that performs KB retrieval using multiple source-trained retrievers, re-ranks the retrieved candidates using an LLM, and uses these as input for LLM few-shot in-context learning to generate logical forms, which are further refined using execution-guided feedback. Experiments over multiple source-target KBQA pairs of varying complexity show that FuSIC-KBQA significantly outperforms adaptations of SoTA KBQA models for this setting. Additional experiments show that FuSIC-KBQA also outperforms SoTA KBQA models in the in-domain setting when training data is limited.
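The pipeline described above can be sketched as four stages. This is a minimal illustrative sketch, not the paper's implementation: the retriever functions, the `llm` object and its `score`/`generate`/`repair` methods, and the `execute` callback are all hypothetical stand-ins.

```python
# Hypothetical sketch of the FuSIC-KBQA pipeline stages (illustrative stubs only).

def retrieve_candidates(question, retrievers):
    """KB retrieval: pool candidates from multiple source-trained retrievers."""
    pooled = []
    for retrieve in retrievers:
        pooled.extend(retrieve(question))
    return pooled

def rerank(question, candidates, llm):
    """LLM re-ranking: order pooled candidates by an LLM-assigned relevance score."""
    return sorted(candidates, key=lambda c: llm.score(question, c), reverse=True)

def generate_logical_form(question, top_candidates, few_shot_examples, llm):
    """Few-shot in-context learning: prompt the LLM with the few target-domain
    examples plus the re-ranked KB context to produce a logical form."""
    prompt = "\n".join(few_shot_examples + top_candidates + [question])
    return llm.generate(prompt)

def refine_with_execution(logical_form, execute, llm, max_rounds=3):
    """Execution-guided feedback: re-prompt the LLM when execution yields
    no answer, up to a fixed number of repair rounds."""
    for _ in range(max_rounds):
        result = execute(logical_form)
        if result:  # non-empty answer: accept this logical form
            return logical_form, result
        logical_form = llm.repair(logical_form)
    return logical_form, execute(logical_form)
```

The design choice worth noting is that each stage is independent: the source-trained retrievers never see target-domain supervision, while the LLM stages consume only the few labeled target examples, which is what makes the few-shot transfer setting workable.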
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Knowledge Base Question Answering | WebQSP → GrailQA-Tech (test) | F1: 74.6 | 36 |
| Knowledge Base Question Answering | GrailQA (test) | F1: 69.1 | 27 |
| Knowledge Base Question Answering | WebQSP → GraphQA-Pop (test) | F1: 61.7 | 20 |
| Knowledge Base Question Answering | GrailQA 500-sample (dev) | F1: 83.6 | 18 |
| Knowledge Base Question Answering | MAKG (Microsoft Academic Graph) (test) | F1: 45.6 | 3 |