
AR-MAP: Are Autoregressive Large Language Models Implicit Teachers for Diffusion Large Language Models?

About

Diffusion Large Language Models (DLLMs) have emerged as a powerful alternative to autoregressive models, enabling parallel token generation across multiple positions. However, preference alignment of DLLMs remains challenging due to the high variance introduced by Evidence Lower Bound (ELBO)-based likelihood estimation. In this work, we propose AR-MAP, a novel transfer learning framework that leverages preference-aligned autoregressive LLMs (AR-LLMs) as implicit teachers for DLLM alignment. We reveal that DLLMs can effectively absorb alignment knowledge from AR-LLMs through simple weight scaling, exploiting the shared architectural structure between these divergent generation paradigms. Crucially, our approach circumvents the high variance and computational overhead of direct DLLM alignment. Comprehensive experiments across diverse preference alignment tasks demonstrate that AR-MAP achieves competitive or superior performance compared to existing DLLM-specific alignment methods, reaching a 69.08% average score across all tasks and models. Our code is available at https://github.com/AMAP-ML/AR-MAP.
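The abstract does not spell out the exact weight-scaling rule. One minimal way to realize "absorbing alignment knowledge through weight scaling" under a shared architecture is to add the scaled parameter delta between the aligned and base AR-LLM to the matching DLLM parameters. The function name `transfer_alignment` and the scale `alpha` are illustrative, not the paper's actual interface:

```python
import numpy as np

def transfer_alignment(dllm_weights, ar_base_weights, ar_aligned_weights, alpha=1.0):
    """Sketch of alignment transfer via weight scaling (illustrative only).

    Adds alpha * (aligned AR weights - base AR weights) to each DLLM
    parameter with a matching name/shape. Assumes the AR-LLM and DLLM
    share parameter names, consistent with the shared-architecture
    premise described in the abstract.
    """
    merged = {}
    for name, w in dllm_weights.items():
        if name in ar_base_weights and name in ar_aligned_weights:
            # Alignment delta learned by the autoregressive teacher
            delta = ar_aligned_weights[name] - ar_base_weights[name]
            merged[name] = w + alpha * delta
        else:
            # Parameters unique to the DLLM are left untouched
            merged[name] = w
    return merged
```

Because the update is a pure weight-space operation, it avoids any ELBO-based likelihood estimation on the DLLM side, which is the source of variance the paper highlights.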

Liang Lin, Feng Xiong, Zengbin Wang, Kun Wang, Junhao Dong, Xuecai Hu, Yong Wang, Xiangxiang Chu • 2026

Related benchmarks

Task                     Dataset      Result        Rank
Instruction Following    IFEval       --            292
Mathematical Reasoning   MATH 500     pass@1: 74    153
Instruction Following    AlpacaEval   --            125
Factuality Evaluation    TruthfulQA   --            40
Open-ended generation    Arena Hard   Score: 84.6   14
