
DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models

About

Recently, diffusion models have emerged as a new paradigm for generative models. Despite their success in domains with continuous signals such as vision and audio, adapting diffusion models to natural language is under-explored due to the discrete nature of text, especially for conditional generation. We tackle this challenge by proposing DiffuSeq: a diffusion model designed for sequence-to-sequence (Seq2Seq) text generation tasks. Upon extensive evaluation over a wide range of Seq2Seq tasks, we find that DiffuSeq achieves comparable or even better performance than six established baselines, including a state-of-the-art model based on pre-trained language models. Apart from quality, an intriguing property of DiffuSeq is its high diversity during generation, which is desirable in many Seq2Seq tasks. We further include a theoretical analysis revealing the connection between DiffuSeq and autoregressive/non-autoregressive models. Bringing together theoretical analysis and empirical evidence, we demonstrate the great potential of diffusion models in complex conditional language generation tasks. Code is available at https://github.com/Shark-NLP/DiffuSeq
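To make the idea concrete, the following is a minimal sketch (not the authors' code) of the forward diffusion step that DiffuSeq-style models apply to token embeddings: the condition and target are embedded as one joint sequence, and Gaussian noise is added only to the target positions ("partial noising"). The function names, dimensions, and the linear noise schedule here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000                                  # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)        # assumed linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)      # cumulative product \bar{alpha}_t

def q_sample(z0, t, cond_mask):
    """Sample z_t ~ q(z_t | z_0), noising only the target positions.

    z0:        (seq_len, dim) embeddings of the joint [condition; target] sequence
    t:         diffusion timestep in [0, T)
    cond_mask: (seq_len,) True where the token belongs to the condition,
               which stays un-noised under partial noising.
    """
    noise = rng.standard_normal(z0.shape)
    zt = np.sqrt(alphas_bar[t]) * z0 + np.sqrt(1.0 - alphas_bar[t]) * noise
    # Keep the conditioning tokens clean; only the target half diffuses.
    zt[cond_mask] = z0[cond_mask]
    return zt

# Toy example: 16 tokens total, first 8 are the condition.
z0 = rng.standard_normal((16, 128))
cond_mask = np.zeros(16, dtype=bool)
cond_mask[:8] = True
zt = q_sample(z0, t=500, cond_mask=cond_mask)
```

At inference, a learned denoiser would then iteratively reverse this process on the target positions while the condition embeddings remain fixed, which is what allows a single diffusion model to handle conditional Seq2Seq generation.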

Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, Lingpeng Kong• 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Summarization | XSum (test) | ROUGE-2 | 2.3 | 246 |
| Machine Translation | WMT Ro-En '16 | BLEU Score | 33.08 | 37 |
| Text Simplification | WikiAuto | BLEU | 36.22 | 29 |
| Machine Translation | WMT14 DE-EN | SacreBLEU | 30.55 | 24 |
| Machine Translation | IWSLT En-De 14 | SacreBLEU | 28.3 | 22 |
| Machine Translation | WMT En-De '14 | SacreBLEU | 26.85 | 22 |
| Paraphrase Generation | QQP (test) | BLEU-2 | 39.75 | 22 |
| Machine Translation | IWSLT14 DE-EN | BLEU Score | 29.43 | 22 |
| Paraphrase Generation | QQP | BLEU | 27.22 | 19 |
| Seq2Seq | QQP | ROUGE-L | 65.8 | 18 |
Showing 10 of 19 rows.
