
Odysseus Navigates the Sirens' Song: Dynamic Focus Decoding for Factual and Diverse Open-Ended Text Generation

About

Large Language Models (LLMs) are increasingly required to generate text that is both factually accurate and diverse across various open-ended applications. However, current stochastic decoding methods struggle to balance such objectives. We introduce Dynamic Focus Decoding (DFD), a novel plug-and-play stochastic approach that resolves this trade-off without requiring additional data, knowledge, or models. DFD adaptively adjusts the decoding focus based on distributional differences across layers, leveraging the modular and hierarchical nature of factual knowledge within LLMs. This dynamic adjustment improves factuality in knowledge-intensive decoding steps and promotes diversity in less knowledge-reliant steps. DFD can be easily integrated with existing decoding methods, enhancing both factuality and diversity with minimal computational overhead. Extensive experiments across seven datasets demonstrate that DFD significantly improves performance, providing a scalable and efficient solution for open-ended text generation.
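To make the abstract's mechanism concrete, here is a minimal sketch of the core idea: measure how much the next-token distribution changes between an intermediate layer and the final layer, and use that signal to sharpen or relax sampling. The specific choices below (Jensen-Shannon divergence as the distributional difference, a linear temperature schedule) are illustrative assumptions, not the paper's exact formulation.

```python
import math
import random

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def js_divergence(p, q):
    """Jensen-Shannon divergence between two distributions (in nats, <= ln 2)."""
    mix = [0.5 * (pi + qi) for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, mix) + 0.5 * kl(q, mix)

def dynamic_focus_temperature(final_logits, mid_logits, base_temp=1.0):
    """Hypothetical schedule: large cross-layer divergence suggests a
    knowledge-intensive step, so lower the temperature (more focus);
    small divergence keeps sampling diverse."""
    p_final = softmax(final_logits)
    p_mid = softmax(mid_logits)
    focus = js_divergence(p_final, p_mid) / math.log(2)  # normalize to [0, 1]
    return base_temp * (1.0 - 0.5 * focus)

def sample_token(logits, temp, rng=None):
    """Sample one token id from the temperature-scaled distribution."""
    rng = rng or random.Random(0)
    probs = softmax([x / temp for x in logits])
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Usage: identical layer distributions leave the temperature unchanged,
# while divergent ones sharpen it.
final = [2.0, 1.0, 0.0]
t_same = dynamic_focus_temperature(final, [2.0, 1.0, 0.0])
t_diff = dynamic_focus_temperature(final, [0.0, 1.0, 2.0])
token = sample_token(final, t_diff)
```

Because the adjustment only reuses hidden states already computed during the forward pass, a scheme like this plugs into any stochastic decoder (temperature, top-p, etc.) with negligible overhead, which matches the plug-and-play claim above.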

Wen Luo, Feifan Song, Wei Li, Guangyue Peng, Shaohang Wei, Houfeng Wang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Reasoning | StrategyQA (test) | Factuality Acc | 68.6 | 28 |
| Commonsense Reasoning | CommonGen (test) | Factuality (MAUVE) | 67.21 | 8 |
| Document Continuation | WikiText-103 (test) | MAUVE | 13.96 | 8 |
| Document Continuation | Wikinews (test) | MAUVE | 24.59 | 8 |
| Question Answering | TruthfulQA | Factuality | 29.62 | 8 |
| Question Answering | StrategyQA | Factuality | 59.6 | 8 |
| Question Answering | TruthfulQA | Truth & Info Score | 45.17 | 8 |
| Open-ended Text Generation | HalluDial | BERTScore | 76.81 | 3 |
