
Dual Reasoning: A GNN-LLM Collaborative Framework for Knowledge Graph Question Answering

About

Large Language Models (LLMs) excel at intuitive, implicit reasoning. Guiding LLMs to construct thought chains can enhance their deliberate reasoning abilities, but this also faces challenges such as hallucination. Knowledge Graphs (KGs) can provide explicit structured knowledge for LLMs to alleviate these issues. However, existing KG-enhanced methods often overlook explicit graph learning, making it difficult to efficiently provide precise reasoning chains to LLMs. Following dual-process theory, we propose Dual-Reasoning (DualR), a novel framework that integrates an external system based on a Graph Neural Network (GNN) for explicit reasoning on KGs, complementing the implicit reasoning of LLMs through externalized reasoning chains. DualR designs an LLM-empowered GNN module for explicit learning on KGs, efficiently extracting high-quality reasoning chains. These reasoning chains are then refined into a knowledge-enhanced multiple-choice prompt that guides a frozen LLM to reason thoughtfully and determine the final answer. Extensive experiments on three benchmark KGQA datasets demonstrate that DualR achieves state-of-the-art performance while maintaining high efficiency and interpretability.
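The pipeline the abstract describes, in which GNN-extracted reasoning chains are refined into a knowledge-enhanced multiple-choice prompt for a frozen LLM, can be sketched at its prompt-construction step. The function name and prompt wording below are hypothetical illustrations, not the paper's actual format:

```python
# Hypothetical sketch of the prompt-construction stage described in the
# abstract: reasoning chains extracted by the GNN module are refined into a
# knowledge-enhanced multiple-choice prompt for a frozen LLM. The exact
# prompt template is an assumption, not taken from the paper.

def build_multiple_choice_prompt(question, reasoning_chains, candidates):
    """Assemble a multiple-choice prompt from KG reasoning chains.

    reasoning_chains: list of entity/relation paths, e.g.
        [["Barack Obama", "place_of_birth", "Honolulu"]]
    candidates: candidate answer entities to present as options.
    """
    # Render each chain as a readable path, one per line.
    chain_lines = "\n".join(
        "- " + " -> ".join(chain) for chain in reasoning_chains
    )
    # Label candidate answers A, B, C, ...
    option_lines = "\n".join(
        f"{chr(ord('A') + i)}. {cand}" for i, cand in enumerate(candidates)
    )
    return (
        f"Question: {question}\n"
        f"Reasoning chains from the knowledge graph:\n{chain_lines}\n"
        f"Options:\n{option_lines}\n"
        "Answer with the letter of the best option."
    )

prompt = build_multiple_choice_prompt(
    "Where was Barack Obama born?",
    [["Barack Obama", "place_of_birth", "Honolulu"]],
    ["Honolulu", "Chicago"],
)
print(prompt)
```

The frozen LLM would then be queried with this prompt, so that the explicit, structured evidence constrains its implicit reasoning to the listed options.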

Guangyi Liu, Yongqi Zhang, Yong Li, Quanming Yao • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Knowledge Graph Question Answering | CWQ (test) | Hits@1 | 65.3 | 100
Knowledge Graph Question Answering | WebQSP (test) | Hit | 81.5 | 61
