
Improving Open-Ended Text Generation via Adaptive Decoding

About

Current language models decode text token by token according to a probability distribution, and determining an appropriate candidate set for the next token is crucial to generation quality. This study introduces adaptive decoding, a mechanism that empowers language models to determine a sensible candidate set dynamically during generation. Specifically, we introduce an entropy-based metric called confidence and conceptualize determining the optimal candidate set as a confidence-increasing process. The rationality of including a token in the candidate set is assessed by the increment of confidence it yields. Experimental results reveal that our method balances diversity and coherence well. Human evaluation shows that our method generates human-preferred text. Additionally, our method can potentially improve the reasoning ability of language models.
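The confidence-increasing process described above can be sketched as a greedy loop: tokens are considered in order of decreasing probability, and each token is admitted only while it still increases a confidence score. The paper's exact confidence formula is not given here, so the sketch below uses one plausible entropy-based stand-in (the partial KL divergence of the candidate set from the uniform distribution); the function name and `eps` threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def adaptive_candidates(probs, eps=0.0):
    """Sketch of adaptive candidate-set selection (hypothetical reading
    of the abstract, not the authors' exact algorithm).

    Confidence stand-in: partial KL divergence of the top-k tokens from
    the uniform distribution. Adding token i changes confidence by
    p_i * log(|V| * p_i), which is positive only while p_i > 1/|V|.
    """
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)[::-1]   # token indices, descending probability
    v = len(probs)
    keep = [order[0]]                 # always keep at least the top token
    for idx in order[1:]:
        p = probs[idx]
        # confidence increment contributed by this token
        gain = p * np.log(v * p) if p > 0 else 0.0
        if gain <= eps:               # stop once a token no longer raises confidence
            break
        keep.append(idx)
    return keep
```

On a peaked distribution the loop halts early (a small candidate set), while a flatter distribution admits more tokens, which is the diversity/coherence trade-off the abstract describes.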

Wenhong Zhu, Hongkun Hao, Zhiwei He, Yiming Ai, Rui Wang • 2024

Related benchmarks

| Task | Dataset | Result (Accuracy) | Rank |
| --- | --- | --- | --- |
| Question Answering | CommonsenseQA | 83.62 | 143 |
| Question Answering | StrategyQA | 80.99 | 114 |
