
QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering

About

The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG. In this work, we propose a new model, QA-GNN, which addresses the above challenges through two key innovations: (i) relevance scoring, where we use LMs to estimate the importance of KG nodes relative to the given QA context, and (ii) joint reasoning, where we connect the QA context and KG to form a joint graph, and mutually update their representations through graph neural networks. We evaluate our model on QA benchmarks in the commonsense (CommonsenseQA, OpenBookQA) and biomedical (MedQA-USMLE) domains. QA-GNN outperforms existing LM and LM+KG models, and exhibits capabilities to perform interpretable and structured reasoning, e.g., correctly handling negation in questions.
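The two innovations above can be illustrated with a toy sketch. This is not the authors' implementation: the paper scores node relevance with an LM and uses an attention-based GNN, whereas here, purely for illustration, relevance is cosine similarity between hypothetical context and node embeddings, and joint reasoning is one relevance-weighted mean-aggregation update over a joint graph whose QA-context node is wired to the KG nodes.

```python
# Toy sketch (simplified, hypothetical) of QA-GNN's two steps:
# (i) relevance scoring of KG nodes against the QA context,
# (ii) one message-passing round over the joint context+KG graph.
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def relevance_scores(context_vec, node_vecs):
    # Stand-in for the paper's LM-based scoring: rate each KG node's
    # importance relative to the QA context.
    return {n: cosine(context_vec, v) for n, v in node_vecs.items()}

def message_pass(node_vecs, edges, scores):
    # One mean-aggregation update, with each neighbor's message weighted
    # by its relevance score (a simplification of the GNN layer).
    neighbors = {n: [] for n in node_vecs}
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    updated = {}
    for n, vec in node_vecs.items():
        msgs = [[scores[m] * x for x in node_vecs[m]] for m in neighbors[n]]
        if not msgs:
            updated[n] = vec
            continue
        agg = [sum(col) / len(msgs) for col in zip(*msgs)]
        # Blend the old representation with the aggregated messages.
        updated[n] = [0.5 * a + 0.5 * b for a, b in zip(vec, agg)]
    return updated

# Joint graph: a QA-context node "ctx" connected to every KG node
# (made-up 2-d embeddings for illustration).
vecs = {"ctx": [1.0, 0.0], "bird": [0.9, 0.1],
        "fly": [0.7, 0.3], "rock": [0.0, 1.0]}
edges = [("ctx", "bird"), ("ctx", "fly"), ("ctx", "rock"), ("bird", "fly")]
scores = relevance_scores(vecs["ctx"], vecs)
new_vecs = message_pass(vecs, edges, scores)
```

In this toy run, "bird" scores far higher than "rock" against the context, so its neighbors' updates are dominated by the relevant nodes, mirroring how relevance scoring lets the joint reasoning step focus on the pertinent part of the KG.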

Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, Jure Leskovec • 2021

Related benchmarks

Task                             Dataset                     Metric          Result   Rank
Commonsense Reasoning            HellaSwag                   Accuracy        82.6     1460
Commonsense Reasoning            PIQA                        Accuracy        79.6     647
Commonsense Reasoning            CSQA                        Accuracy        73.4     366
Commonsense Reasoning            ARC Challenge               Accuracy        44.4     132
Question Answering               OpenBookQA (OBQA) (test)    OBQA Accuracy   82.8     130
Commonsense Question Answering   CSQA (test)                 Accuracy        0.761    127
Question Answering               MedQA-USMLE (test)          Accuracy        45       101
Question Answering               PubMedQA (test)             Accuracy        72.1     81
Commonsense Reasoning            OBQA                        Accuracy        67.8     75
Question Answering               MedQA (test)                Accuracy        38.1     61
Showing 10 of 44 rows

Other info

Code
