
MixCE: Training Autoregressive Language Models by Mixing Forward and Reverse Cross-Entropies

About

Autoregressive language models are trained by minimizing the cross-entropy of the model distribution Q relative to the data distribution P -- that is, minimizing the forward cross-entropy, which is equivalent to maximum likelihood estimation (MLE). We have observed that models trained in this way may "over-generalize", in the sense that they produce non-human-like text. Moreover, we believe that reverse cross-entropy, i.e., the cross-entropy of P relative to Q, is a better reflection of how a human would evaluate text generated by a model. Hence, we propose learning with MixCE, an objective that mixes the forward and reverse cross-entropies. We evaluate models trained with this objective on synthetic data settings (where P is known) and real data, and show that the resulting models yield better generated text without complex decoding strategies. Our code and models are publicly available at https://github.com/bloomberg/mixce-acl2023
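The abstract's core idea can be illustrated in the synthetic setting it mentions, where the data distribution P is known. The sketch below is illustrative, not the paper's training code: `mixce_loss` and `eta` are hypothetical names, and for real data (where P is unknown) the paper instead approximates the reverse term. With P known, the mixed objective is simply a weighted sum of the forward cross-entropy H(P, Q) and the reverse cross-entropy H(Q, P):

```python
import numpy as np

def mixce_loss(p, q, eta=0.5, eps=1e-12):
    """Weighted mix of forward CE H(P, Q) and reverse CE H(Q, P).

    p: data distribution, q: model distribution (1-D arrays summing to 1).
    eta weights the forward (MLE) term; (1 - eta) weights the reverse term.
    eps avoids log(0). Illustrative sketch, assuming P is known (synthetic setting).
    """
    forward = -np.sum(p * np.log(q + eps))  # H(P, Q): the usual MLE objective
    reverse = -np.sum(q * np.log(p + eps))  # H(Q, P): penalizes Q's mass where P is small
    return eta * forward + (1.0 - eta) * reverse

# Toy next-token distributions over a 3-word vocabulary.
p = np.array([0.7, 0.2, 0.1])            # "true" data distribution
q_close = np.array([0.65, 0.25, 0.10])   # model close to P
q_flat = np.array([1/3, 1/3, 1/3])       # "over-generalized": too uniform
```

The reverse term is what punishes the over-general model: `q_flat` spends probability mass on low-P tokens, so `mixce_loss(p, q_flat)` exceeds `mixce_loss(p, q_close)` even though both are reasonable under forward CE alone.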

Shiyue Zhang, Shijie Wu, Ozan Irsoy, Steven Lu, Mohit Bansal, Mark Dredze, David Rosenberg • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText (test) | Perplexity | 23.44 | 52 |
| Bigram Language Modeling | Synthetic Random 50% initialization (val) | Avg JS Divergence | 7.02e-4 | 30 |
| Bigram Language Modeling | Synthetic WebText initialization (val) | Avg JS Divergence | 0.001 | 30 |
| Language Modeling | WebText (test) | Diversity (Div) | 0.85 | 14 |
| Language Modeling | WritingPrompts (test) | Diversity (Div) | 86 | 14 |
| Open-ended Text Generation | WebText (test) | Same Preference Count | 97 | 2 |
| Open-ended Text Generation | WritingPrompts (test) | Same Preference Count | 85 | 2 |

Other info

Code

https://github.com/bloomberg/mixce-acl2023