
ART: Attention Replacement Technique to Improve Factuality in LLMs

About

Hallucination in large language models (LLMs) continues to be a significant issue, particularly in tasks like question answering, where models often generate plausible yet incorrect or irrelevant information. Although various methods have been proposed to mitigate hallucinations, the relationship between attention patterns and hallucinations has not been fully explored. In this paper, we analyze the distribution of attention scores across each layer and attention head of LLMs, revealing a common and intriguing phenomenon: shallow layers of LLMs primarily rely on uniform attention patterns, where the model distributes its attention evenly across the entire sequence. This uniform attention pattern can lead to hallucinations, as the model fails to focus on the most relevant information. To mitigate this issue, we propose a training-free method called Attention Replacement Technique (ART), which replaces these uniform attention patterns in the shallow layers with local attention patterns. This change directs the model to focus more on the relevant contexts, thus reducing hallucinations. Through extensive experiments, ART demonstrates significant reductions in hallucinations across multiple LLM architectures, proving its effectiveness and generalizability without requiring fine-tuning or additional training data.
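To make the core idea concrete, here is a minimal, hypothetical sketch of the replacement step described above: flag attention heads in shallow layers whose attention distributions are close to uniform (measured here by normalized entropy, one plausible criterion; the paper may use a different test), and overwrite them with a banded local attention pattern. All function names, thresholds, and the window size are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def attention_entropy(attn_row):
    # Shannon entropy of a single token's attention distribution.
    p = attn_row / attn_row.sum()
    return -np.sum(p * np.log(p + 1e-12))

def is_uniform(attn, threshold=0.9):
    # attn: (seq, seq) attention matrix for one head. A head counts as
    # "uniform" when its mean row entropy is close to the maximum
    # possible entropy log(seq_len). Threshold is an assumption.
    seq_len = attn.shape[-1]
    max_ent = np.log(seq_len)
    mean_ent = np.mean([attention_entropy(row) for row in attn])
    return mean_ent / max_ent > threshold

def local_attention(seq_len, window=2):
    # Banded local pattern: each token attends uniformly to a small
    # window of neighboring positions instead of the whole sequence.
    mask = np.zeros((seq_len, seq_len))
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = 1.0
    return mask / mask.sum(axis=-1, keepdims=True)

def replace_uniform_heads(attn_maps, shallow_layers=4, window=2, threshold=0.9):
    # attn_maps: list of per-layer arrays of shape (heads, seq, seq).
    # In the shallow layers only, replace near-uniform heads with the
    # local pattern; deeper layers are left untouched.
    out = [a.copy() for a in attn_maps]
    for layer in range(min(shallow_layers, len(out))):
        for head in range(out[layer].shape[0]):
            if is_uniform(out[layer][head], threshold):
                out[layer][head] = local_attention(out[layer].shape[-1], window)
    return out
```

In a real model this substitution would happen inside the forward pass (e.g., via attention hooks) before the attention weights multiply the value vectors; the sketch only shows the pattern-detection and replacement logic on standalone attention matrices.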

Ziqin Luo, Yihao Quan, Xiaofeng Zhang, Xiaosong Yuan, Chen Shen• 2026

Related benchmarks

Task                    Dataset               Metric    Result  Rank
Mathematical Reasoning  GSM8K (test)          Accuracy  85.8    770
Logical Reasoning       LogiQA (test)         Accuracy  63.5    151
Question Answering      OpenBookQA            Accuracy  95.2    119
Commonsense Reasoning   CommonsenseQA (test)  Accuracy  84.9    62
Question Answering      TruthfulQA (test)     Accuracy  72.0    25
Truthfulness            TruthfulQA (test)     Accuracy  46.4    20
