
DiffuSeq-v2: Bridging Discrete and Continuous Text Spaces for Accelerated Seq2Seq Diffusion Models

About

Diffusion models have gained prominence in generating high-quality sequences of text. Nevertheless, current approaches predominantly represent discrete text within a continuous diffusion space, which incurs substantial computational overhead during training and results in slower sampling speeds. In this paper, we introduce a soft absorbing state that facilitates the diffusion model in learning to reconstruct discrete mutations based on the underlying Gaussian space, thereby enhancing its capacity to recover conditional signals. During the sampling phase, we employ state-of-the-art ODE solvers within the continuous space to expedite the sampling process. Comprehensive experimental evaluations reveal that our proposed method effectively accelerates the training convergence by 4x and generates samples of similar quality 800x faster, rendering it significantly closer to practical application. The code is released at https://github.com/Shark-NLP/DiffuSeq
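The core idea of the soft absorbing state can be illustrated with a conceptual sketch of the forward corruption step: with some probability, a token embedding is first replaced ("absorbed") by a shared mask embedding, and Gaussian noise is then added as in standard continuous diffusion. The function and parameter names below are hypothetical illustrations, not the paper's actual implementation:

```python
import numpy as np

def forward_diffuse(x0, t, alphas_cumprod, mask_emb, absorb_prob, seed=0):
    """Forward corruption combining a soft absorbing state with Gaussian noise.

    x0:             (seq_len, dim) token embeddings
    alphas_cumprod: per-timestep cumulative noise schedule
    mask_emb:       (dim,) shared embedding of the absorbing [MASK] state
    absorb_prob:    probability that a position is discretely absorbed
    """
    rng = np.random.default_rng(seed)
    # Discrete mutation: replace some positions with the absorbing state.
    absorbed = rng.random(x0.shape[0]) < absorb_prob
    x = np.where(absorbed[:, None], mask_emb, x0)
    # Continuous corruption: standard Gaussian diffusion on top.
    noise = rng.standard_normal(x.shape)
    a = alphas_cumprod[t]
    return np.sqrt(a) * x + np.sqrt(1.0 - a) * noise
```

Training the denoiser to invert this combined corruption is what, per the abstract, lets the model learn to reconstruct discrete mutations from the underlying Gaussian space; the continuous side of the process is what makes fast ODE solvers applicable at sampling time.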

Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, Lingpeng Kong • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
---- | ------- | ------ | ------ | ----
Paraphrase Detection | QQP (test) | Accuracy | 91.7 | 51
Paraphrasing | QQP | BLEU | 22.1 | 22
Seq2Seq generation | QQP | BLEU | 0.2411 | 17
Abstractive Summarization | arXiv | ROUGE-1 | 39.12 | 7
Dialogue Generation | Commonsense Conversation Dataset | BLEU | 2.2 | 6
Multi-hop Question Answering | HotpotQA | Answer EM | 70.91 | 3
