
PretrainRL: Alleviating Factuality Hallucination of Large Language Models at the Beginning

About

Large language models (LLMs), despite their powerful capabilities, suffer from factual hallucinations, generating verifiably false statements. We identify a root cause of this issue: the imbalanced data distribution of the pretraining corpus, which leaves models in a state of "low-probability truth" and "high-probability falsehood". Recent approaches, such as teaching models to say "I don't know" or post-hoc knowledge editing, either sidestep the problem or suffer from catastrophic forgetting. To address the issue at its root, we propose PretrainRL, a novel framework that integrates reinforcement learning into the pretraining phase to consolidate factual knowledge. The core principle of PretrainRL is "debiasing then learning": it actively reshapes the model's probability distribution by down-weighting high-probability falsehoods, thereby making room for low-probability truths to be learned effectively. To enable this, we design an efficient negative sampling strategy that discovers these high-probability falsehoods, and we introduce novel metrics to evaluate the model's probabilistic state with respect to factual knowledge. Extensive experiments on three public benchmarks demonstrate that PretrainRL significantly alleviates factual hallucinations and outperforms state-of-the-art methods.
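The abstract's "debiasing then learning" principle can be illustrated with a toy sketch. The snippet below is a hypothetical simplification, not the paper's actual algorithm: for a single next-token distribution, it down-weights a known high-probability falsehood while up-weighting a low-probability truth, showing how removing falsehood mass makes "room" for the truth. The update rule, learning rate, and four-token vocabulary are all illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def debias_then_learn_step(logits, truth_idx, false_idx, lr=0.5):
    """One toy 'debiasing then learning' update on a next-token
    distribution (hypothetical illustration, not PretrainRL itself)."""
    probs = softmax(logits)
    grad = np.zeros_like(logits)
    # Debias: push down the falsehood in proportion to its current probability.
    grad[false_idx] -= probs[false_idx]
    # Learn: push up the truth; the gradient is larger while its probability is low.
    grad[truth_idx] += 1.0 - probs[truth_idx]
    return logits + lr * grad

# Toy vocabulary of 4 candidate continuations: index 2 is a high-probability
# falsehood, index 0 a low-probability truth.
logits = np.array([0.0, 0.5, 3.0, 0.2])
for _ in range(20):
    logits = debias_then_learn_step(logits, truth_idx=0, false_idx=2)
probs = softmax(logits)
print(probs[0] > probs[2])  # after debiasing, the truth outranks the falsehood
```

In this sketch, the falsehood's logit only needs to fall far enough for the truth's gradient to dominate; the paper's negative sampling strategy plays the role of supplying `false_idx`, i.e. identifying which high-probability continuations are falsehoods.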

Langming Liu, Kangtao Lv, Haibin Chen, Weidong Zhang, Yejing Wang, Shilei Liu, Xin Tong, Yujin Yuan, Yongwei Wang, Wenbo Su, Bo Zheng • 2026

Related benchmarks

Task                               Dataset                       Result            Rank
Multi-task Language Understanding  MMLU                          Accuracy 52.01    842
Reasoning                          BBH                           Accuracy 66.07    507
Mathematical Problem Solving       MATH                          Accuracy 17.22    166
Factual Knowledge Evaluation       PopQA                         Accuracy 0.5016   32
Factual Knowledge Evaluation       Wikidata knowledge infusion   Accuracy 64.69    18
Grade School Math Word Problems    GSM8K                         Accuracy 0.8097   9
Language Understanding             CEval                         Accuracy 49.06    8
