
Mitigating Prompt-Induced Hallucinations in Large Language Models via Structured Reasoning

About

To address hallucination in large language models (LLMs), this paper proposes a method for mitigating prompt-induced hallucinations. Building on a knowledge distillation chain-style model, we introduce a code module that guides knowledge-graph exploration and incorporate the code into the chain-of-thought prompt, so that it serves as an external knowledge input providing more accurate and structured information to the model. Based on this design, we develop an improved knowledge distillation chain-style model and use it to analyze and constrain the reasoning process of LLMs, thereby improving inference accuracy. We empirically evaluate the proposed approach with GPT-4 and LLaMA-3.3 on multiple public datasets. Experimental results show that incorporating code modules significantly enhances the model's ability to capture contextual information and effectively mitigates prompt-induced hallucinations: HIT@1, HIT@3, and HIT@5 improve by 15.64%, 13.38%, and 13.28%, respectively, and the proposed method achieves HIT@1, HIT@3, and HIT@5 scores exceeding 95% across several evaluation settings. These results indicate that the proposed approach substantially reduces hallucination behavior while improving the accuracy and verifiability of large language models.

Jinbo Hao, Kai Yang, Qingzhen Su, Yang Chen, Yifan Li, Chao Jiang • 2026
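The paper's code is not reproduced on this page. The sketch below is only a minimal illustration of the idea in the abstract, assuming a toy triple-store knowledge graph and hypothetical helper names (build_index, explore_kg, build_prompt): a small code module explores the graph around the question entities, and the retrieved triples are embedded in the chain-of-thought prompt as structured external knowledge.

```python
# Minimal sketch (not the authors' released code): a "code module" traverses a
# toy knowledge graph around the question entities, and the retrieved triples
# are embedded in a chain-of-thought prompt as structured external knowledge.
# Graph format, function names, and prompt wording are illustrative assumptions.

from collections import defaultdict

# Toy knowledge graph as (head, relation, tail) triples.
TRIPLES = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Paris", "population", "2.1 million"),
]

def build_index(triples):
    """Index triples by head entity for one-hop lookups."""
    index = defaultdict(list)
    for head, rel, tail in triples:
        index[head].append((rel, tail))
    return index

def explore_kg(index, entities, hops=2):
    """Breadth-first exploration: collect triples within `hops` of the seed entities."""
    frontier, seen, found = list(entities), set(entities), []
    for _ in range(hops):
        next_frontier = []
        for head in frontier:
            for rel, tail in index.get(head, []):
                found.append((head, rel, tail))
                if tail not in seen:
                    seen.add(tail)
                    next_frontier.append(tail)
        frontier = next_frontier
    return found

def build_prompt(question, entities):
    """Compose a chain-of-thought prompt that embeds the retrieved triples."""
    triples = explore_kg(build_index(TRIPLES), entities)
    evidence = "\n".join(f"({h}, {r}, {t})" for h, r, t in triples)
    return (
        f"Question: {question}\n"
        f"Knowledge-graph evidence (retrieved by the code module):\n{evidence}\n"
        "Answer step by step, citing only the evidence above."
    )

print(build_prompt("Which continent is the capital of France located in?", ["Paris"]))
```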

Related benchmarks

Task | Dataset | Result | Rank
Knowledge-based Multi-step Reasoning | Mean of WebQSP, CWQ, GSM8K, MWP, and Dr. SPIDER (test) | HIT@1 98.4 | 10
Knowledge-intensive reasoning | Generalization Verification | HIT@1 99.18 | 5
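For reference, HIT@k (also written Hits@k) is the fraction of test questions whose gold answer appears among the model's top-k ranked candidates. The sketch below is only illustrative; the function name and the toy predictions and gold answers are made up for the example.

```python
# Illustrative HIT@k computation, assuming each example provides a ranked list
# of candidate answers and a single gold answer.
def hits_at_k(ranked_predictions, gold_answers, k):
    """Fraction of examples whose gold answer appears in the top-k predictions."""
    hits = sum(
        1 for preds, gold in zip(ranked_predictions, gold_answers)
        if gold in preds[:k]
    )
    return hits / len(gold_answers)

preds = [["France", "Spain"], ["Europe", "Asia"], ["Berlin", "Paris"]]
gold = ["France", "Europe", "Paris"]
print(hits_at_k(preds, gold, 1))  # 0.666..., gold ranked first in 2 of 3 examples
print(hits_at_k(preds, gold, 2))  # 1.0, gold within the top 2 in every example
```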
