MoE-DiffuSeq: Enhancing Long-Document Diffusion Models with Sparse Attention and Mixture of Experts
About
We propose **MoE-DiffuSeq**, a diffusion-based framework for efficient long-form text generation that integrates sparse attention with a Mixture-of-Experts (MoE) architecture. Existing sequence diffusion models suffer from prohibitive computational and memory costs when scaling to long documents, largely due to dense attention and slow iterative reconstruction. MoE-DiffuSeq addresses these limitations by combining expert routing with a tailored sparse attention mechanism, substantially reducing attention complexity while preserving global coherence and textual fidelity. In addition, we introduce a *soft absorbing state* within the diffusion process that reshapes attention dynamics during denoising, enabling faster sequence reconstruction and more precise token refinement. This design accelerates both training and sampling without sacrificing generation quality. Extensive experiments on long-document benchmarks demonstrate that MoE-DiffuSeq consistently outperforms prior diffusion-based and sparse-attention baselines in training efficiency, inference speed, and generation quality. Our approach is particularly effective for long-context applications such as scientific document generation, large-scale code synthesis, and extended dialogue modeling, establishing a scalable and expressive solution for diffusion-based long-form text generation.
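To make the combination of sparse attention and expert routing concrete, the following is a minimal NumPy sketch of one such layer. It is illustrative only: the function names, the sliding-window sparsity pattern, and top-k gating are assumptions for demonstration, not the paper's actual sparse attention design or routing scheme.

```python
import numpy as np

def sliding_window_mask(seq_len, window):
    """Local sparse attention mask: each token attends only to
    neighbors within `window` positions (an assumed sparsity pattern)."""
    idx = np.arange(seq_len)
    return np.abs(idx[:, None] - idx[None, :]) <= window

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def moe_sparse_attention(x, Wq, Wk, Wv, expert_weights, gate_W,
                         window=2, top_k=1):
    """One hypothetical layer: windowed sparse attention followed by
    token-wise top-k Mixture-of-Experts routing.

    x:              (seq_len, d) token representations
    Wq, Wk, Wv:     (d, d) attention projections
    expert_weights: (n_experts, d, d) per-expert feed-forward weights
    gate_W:         (d, n_experts) router projection
    """
    seq_len, d = x.shape
    # Sparse attention: scores outside the local window are masked out,
    # so cost grows with seq_len * window rather than seq_len**2.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)
    scores = np.where(sliding_window_mask(seq_len, window), scores, -1e9)
    attn_out = softmax(scores) @ v
    # Expert routing: each token is sent to its top-k experts and the
    # expert outputs are blended by the renormalized gate weights.
    gate = softmax(attn_out @ gate_W)              # (seq_len, n_experts)
    out = np.zeros_like(attn_out)
    for t in range(seq_len):
        top = np.argsort(gate[t])[-top_k:]
        w = gate[t, top] / gate[t, top].sum()
        for e, w_e in zip(top, w):
            out[t] += w_e * (attn_out[t] @ expert_weights[e])
    return out

# Toy usage with random weights (shapes only; not trained parameters).
rng = np.random.default_rng(0)
d, seq_len, n_experts = 4, 6, 3
x = rng.standard_normal((seq_len, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
experts = rng.standard_normal((n_experts, d, d)) * 0.1
gate_W = rng.standard_normal((d, n_experts)) * 0.1
y = moe_sparse_attention(x, Wq, Wk, Wv, experts, gate_W, window=2, top_k=2)
```

A real implementation would batch the expert dispatch and use a blocked sparse kernel, but the sketch shows how routing sits downstream of the sparse attention output.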
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Paraphrase Detection | QQP (test) | Accuracy | 95.3 | 51 |
| Abstractive Summarization | arXiv | ROUGE-1 | 44.41 | 7 |
| Dialogue Generation | Commonsense Conversation Dataset | BLEU | 4.9 | 6 |
| Multi-hop Question Answering | HotpotQA | Answer EM | 72.88 | 3 |