DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models
About
Recently, diffusion models have emerged as a new paradigm for generative modeling. Despite their success in domains with continuous signals such as vision and audio, adapting diffusion models to natural language remains under-explored due to the discrete nature of text, especially for conditional generation. We tackle this challenge by proposing DiffuSeq, a diffusion model designed for sequence-to-sequence (Seq2Seq) text generation tasks. In extensive evaluations over a wide range of Seq2Seq tasks, we find that DiffuSeq achieves performance comparable to, or even better than, six established baselines, including a state-of-the-art model based on pre-trained language models. Beyond quality, an intriguing property of DiffuSeq is its high diversity during generation, which is desirable in many Seq2Seq tasks. We further provide a theoretical analysis revealing the connection between DiffuSeq and autoregressive/non-autoregressive models. Bringing together theoretical analysis and empirical evidence, we demonstrate the great potential of diffusion models in complex conditional language generation tasks. Code is available at https://github.com/Shark-NLP/DiffuSeq
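A key design in DiffuSeq is that source and target token embeddings are concatenated into one sequence, and the forward diffusion process adds noise only to the target positions while the source positions stay clean as the conditioning signal ("partial noising"). The following is a minimal NumPy sketch of that forward step, not the authors' implementation; the function name, tensor shapes, and noise-schedule value are illustrative assumptions.

```python
import numpy as np

def partial_noising(z0, src_mask, alpha_bar_t, rng):
    """One forward-diffusion step with noising restricted to target positions.

    z0          : (seq_len, dim) embeddings of the concatenated [source; target]
    src_mask    : (seq_len,) bool, True where the token belongs to the source
    alpha_bar_t : cumulative noise-schedule value in (0, 1] at step t
    """
    # Standard Gaussian corruption q(z_t | z_0), as in continuous diffusion.
    eps = rng.standard_normal(z0.shape)
    noised = np.sqrt(alpha_bar_t) * z0 + np.sqrt(1.0 - alpha_bar_t) * eps
    # Partial noising: source positions are kept un-noised so the denoiser
    # can condition on them at every step.
    return np.where(src_mask[:, None], z0, noised)

rng = np.random.default_rng(0)
z0 = rng.standard_normal((6, 4))  # toy sequence: 3 source + 3 target tokens
src_mask = np.array([True, True, True, False, False, False])
zt = partial_noising(z0, src_mask, alpha_bar_t=0.5, rng=rng)
```

At sampling time the process runs in reverse: the target positions start from pure noise and are iteratively denoised, while the source positions are fixed throughout.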
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Summarization | XSum (test) | ROUGE-2 | 2.3 | 246 |
| Machine Translation | WMT Ro-En '16 | BLEU Score | 33.08 | 37 |
| Text Simplification | WikiAuto | BLEU | 36.22 | 29 |
| Machine Translation | WMT14 DE-EN | SacreBLEU | 30.55 | 24 |
| Machine Translation | IWSLT En-De 14 | SacreBLEU | 28.3 | 22 |
| Machine Translation | WMT En-De '14 | SacreBLEU | 26.85 | 22 |
| Paraphrase Generation | QQP (test) | BLEU-2 | 39.75 | 22 |
| Machine Translation | IWSLT14 DE-EN | BLEU Score | 29.43 | 22 |
| Paraphrase Generation | QQP | BLEU | 27.22 | 19 |
| Seq2Seq | QQP | ROUGE-L | 65.8 | 18 |