Unmasking Hallucinations: A Causal Graph-Attention Perspective on Factual Reliability in Large Language Models

About

This paper focuses on hallucinations produced by large language models (LLMs). LLMs have shown extraordinary language understanding and generation capabilities, yet they suffer from a major weakness: hallucinations, outputs that are factually incorrect, misleading, or unsupported by the input data. Such hallucinations cause serious problems in scenarios like medical diagnosis or legal reasoning. In this work, we propose a causal graph attention network (GCAN) framework that reduces hallucinations by interpreting the internal attention flow within a transformer architecture, constructing token-level graphs that combine self-attention weights with gradient-based influence scores. Our method quantifies each token's factual dependency using a new metric, the Causal Contribution Score (CCS). We further introduce a fact-anchored graph reweighting layer that dynamically reduces the influence of hallucination-prone nodes during generation. Experiments on standard benchmarks such as TruthfulQA and HotpotQA show a 27.8 percent reduction in hallucination rate and a 16.4 percent improvement in factual accuracy over baseline retrieval-augmented generation (RAG) models. This work contributes to the interpretability, robustness, and factual reliability of future LLM architectures.
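The abstract does not publish the underlying formulas, so the following is a minimal sketch of how the described pipeline could be realized: mixing self-attention weights with gradient-based influence scores into a token-level graph, scoring tokens with a toy Causal Contribution Score, and down-weighting low-scoring (hallucination-prone) nodes. All function names, the mixing coefficient `alpha`, and the threshold rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the GCAN pipeline described above. The graph
# construction, the CCS definition, and the reweighting rule are all
# assumptions for illustration; the paper does not specify them.
import torch


def build_token_graph(attn: torch.Tensor, grad_influence: torch.Tensor,
                      alpha: float = 0.5) -> torch.Tensor:
    """Mix self-attention weights and gradient-based influence scores
    into one token-level adjacency matrix (both inputs are [seq, seq],
    with entry [i, j] = how much token i depends on token j)."""
    # Row-normalize each signal so the mixture is scale-invariant (assumption).
    attn = attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-9)
    grad = grad_influence.abs()
    grad = grad / grad.sum(dim=-1, keepdim=True).clamp_min(1e-9)
    return alpha * attn + (1 - alpha) * grad


def causal_contribution_score(adj: torch.Tensor) -> torch.Tensor:
    """Toy CCS: a token's score is its average incoming edge mass,
    i.e. how much the rest of the sequence depends on it (assumption)."""
    return adj.sum(dim=0) / adj.shape[0]


def fact_anchored_reweight(adj: torch.Tensor, ccs: torch.Tensor,
                           threshold: float = 0.05) -> torch.Tensor:
    """Suppress hallucination-prone source tokens (low CCS), mimicking
    the fact-anchored graph reweighting layer."""
    gate = (ccs >= threshold).float()      # 1 = keep source token, 0 = suppress
    damped = adj * gate.unsqueeze(0)       # zero out columns of gated sources
    # Renormalize rows so each token's dependency mixture still sums to 1.
    return damped / damped.sum(dim=-1, keepdim=True).clamp_min(1e-9)


if __name__ == "__main__":
    seq = 6
    attn = torch.rand(seq, seq).softmax(dim=-1)   # stand-in attention map
    grads = torch.randn(seq, seq)                 # stand-in gradient scores
    adj = build_token_graph(attn, grads)
    ccs = causal_contribution_score(adj)
    print("CCS per token:", ccs)
    print("Reweighted row sums:", fact_anchored_reweight(adj, ccs).sum(-1))
```

Renormalizing the rows after gating keeps the reweighted graph a valid attention-style distribution, so it could drop back into generation in place of the original attention map.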

Sailesh Kiran Kurra, Shiek Ruksana, Vishal Borusu • 2026

Related benchmarks

Task: Factual Question Answering
Dataset: TruthfulQA and HotpotQA
Result: Hallucination Rate 19.7
Rank: 3
