
A Stepwise-Enhanced Reasoning Framework for Large Language Models Based on External Subgraph Generation

About

Large Language Models (LLMs) have achieved strong performance across a wide range of natural language processing tasks in recent years, including machine translation, text generation, and question answering. As their applications extend to increasingly complex scenarios, however, LLMs continue to face challenges in tasks that require deep reasoning and logical inference. In particular, models trained on large-scale textual corpora may incorporate noisy or irrelevant information during generation, which can lead to incorrect predictions or outputs that are inconsistent with factual knowledge. To address this limitation, we propose a stepwise reasoning enhancement framework for LLMs based on external subgraph generation, termed SGR. The proposed framework dynamically constructs query-relevant subgraphs from external knowledge bases and leverages their semantic structure to guide the reasoning process. By reasoning step by step over structured subgraphs, SGR reduces the influence of noisy information and improves reasoning accuracy. Specifically, the framework first generates an external subgraph tailored to the input query, then guides the model to conduct multi-step reasoning grounded in the subgraph, and finally integrates multiple reasoning paths to produce the final answer. Experimental results on multiple benchmark datasets demonstrate that SGR consistently outperforms strong baselines, indicating its effectiveness in enhancing the reasoning capabilities of LLMs.
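The three stages sketched in the abstract (subgraph construction, stepwise reasoning over the subgraph, and aggregation of reasoning paths) can be illustrated with a toy pipeline. All function names, the breadth-limited subgraph extraction, the path enumeration, and the majority-vote aggregation below are illustrative assumptions, not the authors' implementation; in SGR the reasoning steps would be driven by the LLM rather than exhaustive graph traversal.

```python
# Hypothetical sketch of the SGR-style pipeline:
#  (1) build a query-relevant subgraph from an external KB,
#  (2) enumerate stepwise reasoning paths over that subgraph,
#  (3) aggregate multiple paths into a final answer.
# The toy KB and all names here are assumptions for illustration only.
from collections import Counter, defaultdict

def build_subgraph(kb_triples, query_entities, hops=2):
    """Step 1: collect triples within `hops` of the query entities."""
    adj = defaultdict(list)
    for h, r, t in kb_triples:
        adj[h].append((r, t))
    frontier, subgraph = set(query_entities), []
    for _ in range(hops):
        next_frontier = set()
        for e in frontier:
            for r, t in adj[e]:
                subgraph.append((e, r, t))
                next_frontier.add(t)
        frontier = next_frontier
    return subgraph

def reasoning_paths(subgraph, start, max_len=2):
    """Step 2: enumerate relation paths from `start` over the subgraph."""
    adj = defaultdict(list)
    for h, r, t in subgraph:
        adj[h].append((r, t))
    paths = []
    def walk(node, path):
        if path:
            paths.append(path)
        if len(path) < max_len:
            for r, t in adj[node]:
                walk(t, path + [(r, t)])
    walk(start, [])
    return paths

def answer(paths, target_relation):
    """Step 3: majority-vote over endpoints of paths ending in the target relation."""
    votes = Counter(p[-1][1] for p in paths if p[-1][0] == target_relation)
    return votes.most_common(1)[0][0] if votes else None

# Toy query: "Which continent is Paris in?"
kb = [("Paris", "capital_of", "France"),
      ("France", "in_continent", "Europe"),
      ("Paris", "located_in", "France")]
sg = build_subgraph(kb, ["Paris"])
ans = answer(reasoning_paths(sg, "Paris"), "in_continent")  # → "Europe"
```

Both two-hop paths (via `capital_of` and via `located_in`) end at "Europe", so the vote is unanimous; with a noisier subgraph, aggregating several paths is what damps the influence of irrelevant triples.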

Xin Zhang, Yang Cao, Baoxing Wu, Xinyi Chen, Kai Song, Siying Li • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Knowledge Base Question Answering | GrailQA | Accuracy | 70.3 | 21 |
| Question Answering | WebQSP | Hits@1 | 82.6 | 19 |
| Question Answering | CWQ | Hits@1 | 63.2 | 8 |
