
DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models

About

Despite their impressive capabilities, large language models (LLMs) are prone to hallucinations, i.e., generating content that deviates from facts seen during pretraining. We propose a simple decoding strategy for reducing hallucinations with pretrained LLMs that requires neither conditioning on retrieved external knowledge nor additional fine-tuning. Our approach obtains the next-token distribution by contrasting the differences in logits obtained from projecting the later layers versus earlier layers to the vocabulary space, exploiting the fact that factual knowledge in an LLM has generally been shown to be localized to particular transformer layers. We find that this Decoding by Contrasting Layers (DoLa) approach is able to better surface factual knowledge and reduce the generation of incorrect facts. DoLa consistently improves truthfulness across multiple-choice tasks and open-ended generation tasks, for example improving the performance of LLaMA family models on TruthfulQA by 12-17% absolute points, demonstrating its potential in making LLMs reliably generate truthful facts.
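To make the contrastive step concrete, below is a minimal sketch of the core idea in PyTorch with Hugging Face Transformers. It is an illustration under stated assumptions, not the authors' implementation: the model name, the fixed premature-layer index, the 0.1 plausibility threshold, and the LLaMA-specific early-exit projection (model.model.norm followed by model.lm_head) are all assumptions; the paper additionally selects the premature layer dynamically per token via Jensen-Shannon divergence, which this sketch omits.

    # Minimal sketch of the DoLa contrast step, assuming a LLaMA-style model
    # in Hugging Face Transformers. Fixed premature layer and alpha=0.1 are
    # illustrative choices; DoLa proper picks the premature layer dynamically.
    import math
    import torch
    import torch.nn.functional as F
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "huggyllama/llama-7b"  # assumption: any LLaMA-style causal LM
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    @torch.no_grad()
    def dola_next_token(input_ids, premature_layer=16, alpha=0.1):
        out = model(input_ids, output_hidden_states=True)
        # Mature distribution: the model's usual final-layer logits.
        mature_logp = F.log_softmax(out.logits[:, -1, :], dim=-1)
        # Premature distribution: early-exit an intermediate hidden state
        # through the final norm and the shared unembedding (LM head).
        h_early = out.hidden_states[premature_layer][:, -1, :]
        early_logp = F.log_softmax(model.lm_head(model.model.norm(h_early)), dim=-1)
        # Plausibility constraint: only contrast tokens the mature layer
        # already deems likely, so implausible tokens cannot win the contrast.
        keep = mature_logp >= mature_logp.max(dim=-1, keepdim=True).values + math.log(alpha)
        scores = torch.where(keep, mature_logp - early_logp,
                             torch.full_like(mature_logp, float("-inf")))
        return scores.argmax(dim=-1)  # greedy pick over contrasted scores

    ids = tok("The capital of France is", return_tensors="pt").input_ids
    print(tok.decode(dola_next_token(ids)))

The intuition behind the subtraction: tokens whose probability grows sharply between the early and final layers are the ones the later, more factual layers contribute, so amplifying that difference favors factual continuations over generic ones.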

Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon Kim, James Glass, Pengcheng He • 2023

Related benchmarks

Task                                   Dataset     Metric        Result  Rank
Visual Question Answering              VQA v2      Accuracy      80.75   1165
Visual Question Answering              VizWiz      Accuracy      74.38   1043
Mathematical Reasoning                 GSM8K       Accuracy      90.75    983
Visual Question Answering              GQA         Accuracy      73.4     963
Object Hallucination Evaluation        POPE        Accuracy      83.1     935
Code Generation                        HumanEval   Pass@1        12.8     850
Multimodal Evaluation                  MME         --            --       557
Text-based Visual Question Answering   TextVQA     Accuracy      84.19    496
Question Answering                     OpenBookQA  Accuracy      49.41    465
Multimodal Understanding               MM-Vet      MM-Vet Score  65.6     418

Showing 10 of 131 rows.
