
The Curious Case of Neural Text Degeneration

About

Despite considerable advancements with deep neural language models, the enigma of neural text degeneration persists when these models are tested as text generators. The counter-intuitive empirical observation is that even though the use of likelihood as a training objective leads to high-quality models for a broad range of language understanding tasks, using likelihood as a decoding objective leads to text that is bland and strangely repetitive. In this paper, we reveal surprising distributional differences between human text and machine text. In addition, we find that decoding strategies alone can dramatically affect the quality of machine text, even when generated from exactly the same neural language model. Our findings motivate Nucleus Sampling, a simple but effective method to draw the best out of neural generation. By sampling text from the dynamic nucleus of the probability distribution, which allows for diversity while effectively truncating the less reliable tail of the distribution, the resulting text better matches the quality of human text, yielding enhanced diversity without sacrificing fluency and coherence.

Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, Yejin Choi • 2019
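
For readers who want the mechanics, here is a minimal sketch of Nucleus (top-p) Sampling in Python with NumPy, written from the abstract's description alone. Here `logits` stands for a language model's unnormalized next-token scores; the function name and the default threshold p = 0.9 are illustrative assumptions, not details from the paper.

    import numpy as np

    def nucleus_sample(logits, p=0.9, rng=None):
        """Draw one token id from the top-p ("nucleus") of a next-token distribution."""
        rng = rng or np.random.default_rng()

        # Convert raw model scores into a probability distribution (softmax).
        probs = np.exp(logits - np.max(logits))
        probs /= probs.sum()

        # Sort tokens from most to least probable.
        order = np.argsort(probs)[::-1]
        sorted_probs = probs[order]

        # The nucleus: the smallest prefix of top tokens whose cumulative
        # probability reaches p; everything beyond it is the unreliable tail.
        cutoff = int(np.searchsorted(np.cumsum(sorted_probs), p)) + 1

        # Renormalize within the nucleus and sample from it.
        nucleus = sorted_probs[:cutoff] / sorted_probs[:cutoff].sum()
        return int(rng.choice(order[:cutoff], p=nucleus))

Because the cutoff is computed from cumulative probability mass rather than a fixed count, the nucleus shrinks when the model is confident (near-greedy behavior) and widens when the distribution is flat; this per-step adaptivity is what distinguishes the method from a fixed top-k truncation.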

Related benchmarks

Task                             Dataset         Result           Rank
Object Hallucination Evaluation  POPE            --               935
Mathematical Reasoning           GSM8K           --               351
Hallucination Detection          TriviaQA        AUROC 0.723      265
Summarization                    XSum (test)     ROUGE-2 16.57    231
Mathematical Reasoning           MATH            --               162
Mathematical Reasoning           GSM8K (test)    Accuracy 79.4    155
Question Answering               CommonsenseQA   Accuracy 83.77   143
Commonsense Reasoning            StrategyQA      --               125
Question Answering               StrategyQA      Accuracy 74.79   114
Summarization                    XSum            ROUGE-2 23.74    108
(Showing 10 of 63 rows.)
