
MoE-DiffuSeq: Enhancing Long-Document Diffusion Models with Sparse Attention and Mixture of Experts

About

We propose MoE-DiffuSeq, a diffusion-based framework for efficient long-form text generation that integrates sparse attention with a Mixture-of-Experts (MoE) architecture. Existing sequence diffusion models suffer from prohibitive computational and memory costs when scaling to long documents, largely due to dense attention and slow iterative reconstruction. MoE-DiffuSeq addresses these limitations by combining expert routing with a tailored sparse attention mechanism, substantially reducing attention complexity while preserving global coherence and textual fidelity. In addition, we introduce a soft absorbing state within the diffusion process that reshapes attention dynamics during denoising, enabling faster sequence reconstruction and more precise token refinement. This design accelerates both training and sampling without sacrificing generation quality. Extensive experiments on long-document benchmarks demonstrate that MoE-DiffuSeq consistently outperforms prior diffusion-based and sparse-attention baselines in training efficiency, inference speed, and generation quality. Our approach is particularly effective for long-context applications such as scientific document generation, large-scale code synthesis, and extended dialogue modeling, establishing a scalable and expressive solution for diffusion-based long-form text generation.
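To make the two architectural ideas concrete, here is a minimal PyTorch sketch of a block that pairs windowed sparse attention with top-1 MoE routing, followed by a toy version of the soft absorbing state. The class names, window size, expert count, routing rule, and noise schedule are all illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: local (windowed) sparse attention + top-1 MoE routing.
# All hyperparameters below are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalSparseAttention(nn.Module):
    """Each token attends only to a fixed window of neighbors, cutting
    attention work from O(n^2) to O(n * window). For clarity this sketch
    still materializes the full score matrix; a real kernel would compute
    only the band."""
    def __init__(self, dim, window=64):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.window = window

    def forward(self, x):                          # x: (batch, seq, dim)
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) / d ** 0.5
        idx = torch.arange(n, device=x.device)
        band = (idx[None, :] - idx[:, None]).abs() <= self.window
        scores = scores.masked_fill(~band, float("-inf"))
        return self.proj(F.softmax(scores, dim=-1) @ v)

class MoEFeedForward(nn.Module):
    """Top-1 token routing over a small pool of feed-forward experts."""
    def __init__(self, dim, n_experts=4, mult=4):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, mult * dim), nn.GELU(),
                          nn.Linear(mult * dim, dim))
            for _ in range(n_experts))

    def forward(self, x):                          # x: (batch, seq, dim)
        gate = F.softmax(self.router(x), dim=-1)   # (batch, seq, n_experts)
        top_p, top_i = gate.max(dim=-1)            # winning expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_i == e
            if mask.any():
                # Scale by the gate value so routing stays differentiable.
                out[mask] = expert(x[mask]) * top_p[mask].unsqueeze(-1)
        return out

class MoEDiffuSeqBlock(nn.Module):
    def __init__(self, dim=256, window=64, n_experts=4):
        super().__init__()
        self.attn = LocalSparseAttention(dim, window)
        self.moe = MoEFeedForward(dim, n_experts)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):
        x = x + self.attn(self.norm1(x))
        return x + self.moe(self.norm2(x))

x = torch.randn(2, 512, 256)                       # a long(ish) sequence
print(MoEDiffuSeqBlock()(x).shape)                 # torch.Size([2, 512, 256])
```

The soft absorbing state can be pictured as replacing hard token masking with a graded interpolation toward a learned absorbing vector during the forward process, so the denoiser sees a smooth corruption signal. The linear schedule and parameterization below are again assumptions:

```python
# Hypothetical sketch of a soft absorbing state in the forward diffusion.
import torch
import torch.nn as nn

class SoftAbsorbingNoiser(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.absorb = nn.Parameter(torch.zeros(dim))  # learned absorbing state

    def forward(self, x, t, T):
        # alpha in [0, 1]: how far each token has collapsed toward the
        # absorbing state at timestep t of T (linear schedule assumed).
        alpha = (t / T).view(-1, 1, 1)
        noise = torch.randn_like(x)
        return (1 - alpha) * x + alpha * self.absorb + alpha.sqrt() * noise
```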

Alexandros Christoforos, Chadbourne Davis • 2025

Related benchmarks

Task                           Dataset                            Result            Rank
Paraphrase Detection           QQP (test)                         Accuracy 95.3     51
Abstractive Summarization      arXiv                              ROUGE-1 44.41     7
Dialogue Generation            Commonsense Conversation Dataset   BLEU 4.9          6
Multi-hop Question Answering   HotpotQA                           Answer EM 72.88   3
