
Evo: Autoregressive-Diffusion Large Language Models with Evolving Balance

About

We introduce Evo, a latent-trajectory model that bridges autoregressive (AR) and diffusion-based language generation within a continuous evolutionary generative framework. Rather than treating AR decoding and diffusion generation as separate paradigms, Evo reconceptualizes text generation as a latent flow: each token is associated with a vector-valued embedding that evolves over a progression variable $t_i \in [0, 1]$ indicating its semantic maturity. Low $t_i$ values correspond to confident AR-like refinement, while high values invoke diffusion-style planning, allowing the model to adaptively balance AR and diffusion behavior based on uncertainty. Theoretically, we show that both AR and diffusion models emerge as discretizations of a shared probability flow, and we derive Evo's training objective from a unified variational ELBO. The model is implemented as a time-conditioned Transformer governed by a shared vector field, trained end-to-end to jointly infer latent codes and their progression times. During decoding, Evo performs efficient, semantics-aware refinement, achieving high-quality outputs without sacrificing speed. Empirically, Evo 8B achieves state-of-the-art or highly competitive results on 15 diverse benchmarks spanning reasoning (GSM8K, ARC-C), code generation (HumanEval, MBPP), and general language understanding, while maintaining fast inference. Our results position Evo as a new paradigm for LLM design with strong generation quality, robust symbolic reasoning, and decoding efficiency.
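The core mechanism described above can be sketched in a toy form: each token embedding carries its own progression time $t_i$, a shared vector field drives its refinement, and the amount of diffusion-style noise is gated by $t_i$. The `vector_field` and update rule below are illustrative assumptions standing in for the paper's time-conditioned Transformer, not its actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def vector_field(x, t):
    """Hypothetical shared vector field f(x, t): a toy contraction
    toward a fixed target embedding, standing in for the paper's
    time-conditioned Transformer."""
    target = np.ones_like(x)   # stand-in for the fully "mature" embedding
    return target - x          # drift pointing toward the target

def evo_step(x, t, dt=0.1, noise_scale=0.5):
    """One refinement step per token.

    Tokens with high progression time t_i receive a diffusion-style
    noisy update (planning); tokens with low t_i get a nearly
    deterministic AR-like refinement. The noise gating and schedule
    here are assumptions chosen for illustration only.
    """
    drift = vector_field(x, t)
    noise = rng.normal(size=x.shape) * noise_scale * t[:, None]
    x_new = x + dt * drift + np.sqrt(dt) * noise
    t_new = np.clip(t - dt, 0.0, 1.0)  # tokens mature as t_i decreases
    return x_new, t_new

# 4 tokens with 8-dim embeddings at mixed semantic maturities
x = rng.normal(size=(4, 8))
t = np.array([0.05, 0.2, 0.8, 1.0])  # low t_i: AR-like; high t_i: diffusion-like
for _ in range(10):
    x, t = evo_step(x, t)
```

After enough steps every $t_i$ reaches 0, the noise term vanishes, and all embeddings settle deterministically toward the target, mirroring the transition from diffusion-style planning to AR-like refinement.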

Junde Wu, Minhao Hu, Jiayuan Zhu, Yuyuan Liu, Tianyi Zhang, Kang Li, Jingkun Chen, Jiazhen Pan, Min Xu, Yueming Jin• 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 82.1 | 1891 |
| Commonsense Reasoning | WinoGrande | Accuracy | 76.3 | 1085 |
| Code Generation | HumanEval | -- | -- | 1036 |
| Question Answering | ARC Challenge | Accuracy | 65.6 | 906 |
| Language Understanding | MMLU | Accuracy | 78.6 | 825 |
| Reasoning | BBH | Accuracy | 68.4 | 672 |
| Physical Commonsense Reasoning | PIQA | Accuracy | 81.2 | 572 |
| Commonsense Reasoning | HellaSwag | Accuracy | 86.4 | 213 |
| Scientific Reasoning | GPQA | Accuracy | 39.1 | 75 |
| Question Answering | MMLU | Accuracy | 76.8 | 46 |

Showing 10 of 18 rows.
