
Your Absorbing Discrete Diffusion Secretly Models the Conditional Distributions of Clean Data

About

Discrete diffusion models with absorbing processes have shown promise in language modeling. The key quantities to be estimated are the ratios between the marginal probabilities of two transitive states at all timesteps, called the concrete score. In this paper, we reveal that the concrete score in absorbing diffusion can be expressed as conditional probabilities of clean data, multiplied by a time-dependent scalar in an analytic form. Motivated by this finding, we propose reparameterized absorbing discrete diffusion (RADD), a dedicated diffusion model without a time condition that characterizes the time-independent conditional probabilities. Besides its simplicity, RADD can reduce the number of function evaluations (NFEs) by caching the output of the time-independent network when the noisy sample remains unchanged over a sampling interval, which enables sampling acceleration. Built upon the new perspective of conditional distributions, we further unify absorbing discrete diffusion and any-order autoregressive models (AO-ARMs), showing that the upper bound on the negative log-likelihood for the diffusion model can be interpreted as an expected negative log-likelihood for AO-ARMs. Further, our RADD models achieve SOTA performance among diffusion models on 5 zero-shot language modeling benchmarks (measured by perplexity) at the GPT-2 scale. Our code is available at https://github.com/ML-GSAI/RADD.
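The caching idea in the abstract can be sketched in a few lines: because RADD's network takes no timestep input, its output depends only on the current (partially masked) sequence, so it can be reused across sampling steps in which no token was revealed. The sketch below is illustrative only, assuming a toy network, a placeholder `MASK_ID` for the absorbing token, and a crude per-position unmasking rule in place of the paper's actual reverse-process update.

```python
import torch

MASK_ID = 0  # placeholder id for the absorbing ("mask") token


class TimeIndependentNet(torch.nn.Module):
    """Toy stand-in for RADD's network: maps a token sequence to
    per-position logits over clean tokens, with no timestep input."""

    def __init__(self, vocab_size=16, dim=32):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, dim)
        self.head = torch.nn.Linear(dim, vocab_size)

    def forward(self, x):
        logits = self.head(self.embed(x))
        logits[..., MASK_ID] = float("-inf")  # never predict the mask token
        return logits


def sample_with_cache(net, x, n_steps, unmask_prob=0.3):
    """Iteratively unmask x, re-evaluating the network only when the
    sequence actually changed. Returns the sequence and the NFE count."""
    cached_logits, nfe = None, 0
    for _ in range(n_steps):
        if not (x == MASK_ID).any():
            break  # fully denoised
        if cached_logits is None:
            cached_logits = net(x)  # evaluate only on a changed sequence
            nfe += 1
        masked = x == MASK_ID
        # Crude proxy for the reverse process: reveal each masked
        # position independently with probability `unmask_prob`.
        reveal = masked & (torch.rand(x.shape) < unmask_prob)
        if reveal.any():
            probs = cached_logits.softmax(-1)
            draws = torch.multinomial(probs, 1).squeeze(-1)
            x = torch.where(reveal, draws, x)
            cached_logits = None  # invalidate cache: sequence changed
    return x, nfe
```

In steps where no position happens to be revealed, the cached logits are reused and the NFE counter does not advance, which is the source of the acceleration described in the abstract.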

Jingyang Ou, Shen Nie, Kaiwen Xue, Fengqi Zhu, Jiacheng Sun, Zhenguo Li, Chongxuan Li• 2024

Related benchmarks

| Task              | Dataset         | Result           | Rank |
|-------------------|-----------------|------------------|------|
| Language Modeling | WikiText2       | Perplexity 29.17 | 2839 |
| Language Modeling | PTB             | Perplexity 75.16 | 1034 |
| Language Modeling | WikiText        | PPL 28.44        | 732  |
| Language Modeling | LAMBADA         | --               | 268  |
| Language Modeling | WikiText-103    | PPL 28.03        | 189  |
| Language Modeling | LAMBADA         | Perplexity 41.96 | 150  |
| Language Modeling | LM1B (test)     | Perplexity 70.71 | 130  |
| Language Modeling | OpenWebText     | Perplexity 84    | 91   |
| Language Modeling | LAMBADA (test)  | --               | 71   |
| Language Modeling | Wikitext (test) | Perplexity 35.25 | 62   |

Showing 10 of 22 rows
