
Esoteric Language Models: Bridging Autoregressive and Masked Diffusion LLMs

About

Diffusion-based language models offer a compelling alternative to autoregressive (AR) models by enabling parallel and controllable generation. Within this family, Masked Diffusion Models (MDMs) currently perform best but still underperform AR models in perplexity and lack key inference-time efficiency features, most notably KV caching. We introduce Eso-LMs, a new family of models that fuses the AR and MDM paradigms, smoothly interpolating between their perplexities while overcoming their respective limitations. Unlike prior work, which uses transformers with bidirectional attention as MDM denoisers, we exploit the connection between MDMs and Any-Order autoregressive models and adopt causal attention. This design lets us compute the exact likelihood of MDMs for the first time and, crucially, enables us to introduce KV caching for MDMs while preserving parallel generation, significantly improving inference efficiency. Combined with an optimized sampling schedule, Eso-LMs achieve a new state of the art on the speed-quality Pareto frontier for unconditional generation. On long contexts, they deliver $\mathbf{14\text{--}65\times}$ faster inference than standard MDMs and $\mathbf{3\text{--}4\times}$ faster inference than prior semi-autoregressive approaches. We provide code, model checkpoints, and a video tutorial on the project page: https://s-sahoo.com/Eso-LMs.
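To see why causal attention is what unlocks KV caching here, the sketch below shows a toy denoising loop in which a block of tokens is processed in parallel per step while keys and values of already-committed positions are cached and never recomputed. This is a minimal illustration, not the authors' Eso-LM implementation: `TinyCausalDenoiser`, `block_mask`, and the decoding loop are hypothetical stand-ins (single layer, no positional encodings, no actual masking/unmasking schedule).

```python
# Minimal sketch (assumed toy model, NOT the Eso-LM architecture) of how a
# causal-attention denoiser permits KV caching during block-parallel decoding.

import torch
import torch.nn.functional as F


def block_mask(t_cache: int, t_new: int, device) -> torch.Tensor:
    """Boolean attention mask (True = may attend): the new block sees every
    cached position and attends causally within itself."""
    see_cache = torch.ones(t_new, t_cache, dtype=torch.bool, device=device)
    see_self = torch.tril(torch.ones(t_new, t_new, dtype=torch.bool, device=device))
    return torch.cat([see_cache, see_self], dim=1)  # (t_new, t_cache + t_new)


class TinyCausalDenoiser(torch.nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 64):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, d_model)
        self.q = torch.nn.Linear(d_model, d_model)
        self.k = torch.nn.Linear(d_model, d_model)
        self.v = torch.nn.Linear(d_model, d_model)
        self.out = torch.nn.Linear(d_model, vocab_size)

    def forward(self, new_tokens, cache_k=None, cache_v=None):
        # new_tokens: (B, T_new) ids for the positions denoised this step.
        x = self.embed(new_tokens)
        q, k_new, v_new = self.q(x), self.k(x), self.v(x)
        # With causal attention, the K/V of already-committed tokens never
        # change, so we concatenate cached entries instead of recomputing.
        k = k_new if cache_k is None else torch.cat([cache_k, k_new], dim=1)
        v = v_new if cache_v is None else torch.cat([cache_v, v_new], dim=1)
        mask = block_mask(k.size(1) - q.size(1), q.size(1), q.device)
        h = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
        return self.out(h), k, v


# Toy decoding loop: denoise 4 tokens in parallel per step; the cache grows
# monotonically, so each step only pays for the new block's queries.
model = TinyCausalDenoiser(vocab_size=100)
cache_k = cache_v = None
noisy = torch.randint(0, 100, (1, 8))  # stand-in for a partially masked sequence
for block in noisy.split(4, dim=1):
    logits, cache_k, cache_v = model(block, cache_k, cache_v)
    # here one would sample clean tokens from `logits` and commit them
```

The key design point the abstract states: a bidirectional denoiser would invalidate cached keys and values whenever any token changes, whereas the Any-Order-AR view lets tokens be arranged so committed positions always precede the ones still being denoised, which is what keeps the cache valid.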

Subham Sekhar Sahoo, Zhihan Yang, Yash Akhauri, Johnna Liu, Deepansha Singh, Zhoujun Cheng, Zhengzhong Liu, Eric Xing, John Thickstun, Arash Vahdat • 2025

Related benchmarks

Task               | Dataset           | Metric               | Result | Rank
Language Modeling  | PTB (val)         | Perplexity           | 97.46  | 83
Language Modeling  | WikiText (val)    | Perplexity           | 35.65  | 34
Language Modeling  | LM1B L=128 (test) | NELBO PPL            | 24.53  | 17
Language Modeling  | LM1B (val)        | Perplexity           | 60.11  | 17
Language Modeling  | OWT L=1024 (test) | NELBO PPL            | 21.92  | 11
Language Modeling  | AG News (val)     | Perplexity           | 65.26  | 8
Language Modeling  | LAMBADA (val)     | Perplexity           | 57.33  | 8
Language Modeling  | PubMed (val)      | Perplexity           | 60.2   | 8
Language Modeling  | ArXiv (val)       | Perplexity           | 53.78  | 8
Generation Latency | OWT L=2048        | Sampling Latency (s) | 14.6   | 5

(Showing 10 of 12 rows)
