
Alleviating Hallucinations of Large Language Models through Induced Hallucinations

About

Despite their impressive capabilities, large language models (LLMs) have been observed to generate responses that include inaccurate or fabricated information, a phenomenon commonly known as "hallucination". In this work, we propose a simple Induce-then-Contrast Decoding (ICD) strategy to alleviate hallucinations. We first construct a factually weak LLM by inducing hallucinations from the original LLM. Then, we penalize these induced hallucinations during decoding to enhance the factuality of the generated content. Concretely, we determine the final next-token predictions by amplifying the predictions from the original model and downplaying the induced untruthful predictions via contrastive decoding. Experimental results on both discrimination-based and generation-based hallucination evaluation benchmarks, such as TruthfulQA and FActScore, demonstrate that our proposed ICD method can effectively enhance the factuality of LLMs across various model sizes and families. For example, when equipped with ICD, Llama2-7B-Chat and Mistral-7B-Instruct achieve performance comparable to ChatGPT and GPT4 on TruthfulQA, respectively.
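The contrastive-decoding step described above can be sketched in a few lines: combine the original model's next-token logits with those of the hallucination-induced model so that tokens favored by the weak model are penalized. This is a minimal illustration, not the paper's implementation; the function name, the exact combination formula, and the contrast strength `alpha` are assumptions for demonstration.

```python
import numpy as np

def icd_next_token_logits(logits_original, logits_induced, alpha=1.0):
    """Contrast the original model's logits against those of a factually
    weak, hallucination-induced model.

    Amplifies the original predictions and downplays tokens the induced
    model favors. `alpha` (contrast strength) is a hypothetical
    hyperparameter chosen for this sketch.
    """
    logits_original = np.asarray(logits_original, dtype=float)
    logits_induced = np.asarray(logits_induced, dtype=float)
    # Amplify the original prediction, subtract the induced one.
    return (1.0 + alpha) * logits_original - alpha * logits_induced

# Toy vocabulary of 4 tokens. The original model slightly prefers
# token 2, which the induced (hallucination-prone) model strongly
# prefers, suggesting token 2 is an untruthful continuation.
orig = np.array([1.8, 1.0, 2.0, 0.5])
weak = np.array([1.0, 1.0, 3.5, 0.5])

contrasted = icd_next_token_logits(orig, weak, alpha=1.0)
print(int(np.argmax(orig)))        # greedy pick before contrast: token 2
print(int(np.argmax(contrasted)))  # after contrast: token 0
```

In this toy example, greedy decoding on the original logits would emit token 2, but contrasting against the induced model's logits shifts the choice to token 0, since token 2 is exactly what the hallucination-induced model over-predicts.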

Yue Zhang, Leyang Cui, Wei Bi, Shuming Shi • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Visual Question Answering | VizWiz | Accuracy 46.9 | 1043 |
| Object Hallucination Evaluation | POPE | -- | 935 |
| Multiple-Choice | TruthfulQA | MC1 Accuracy 46.32 | 83 |
| Question Answering | TruthfulQA | -- | 73 |
| Multimodal Conversation | LLaVA-Bench Wild | Score 69.7 | 52 |
| Truthfulness Evaluation | TruthfulQA (test) | MC1 37.87 | 30 |
| Object Hallucination Evaluation | MSCOCO CHAIR | CHAIR_S 47.7 | 27 |
| Visual Question Answering | MM-Vet | MM-Vet ASR Accuracy 30.4 | 27 |
| Visual Question Answering | ScienceQA (SQA) | SQA Accuracy 62.8 | 27 |
| Question Answering | TruthfulQA MC1 | MC1 Accuracy 46.32 | 24 |
(Showing 10 of 11 rows.)
