
CORD: Bridging the Audio-Text Reasoning Gap via Weighted On-policy Cross-modal Distillation

About

Large Audio Language Models (LALMs) have garnered significant research interest. Despite being built upon text-based large language models (LLMs), LALMs frequently exhibit a degradation in knowledge and reasoning capabilities. We hypothesize that this limitation stems from the failure of current training paradigms to effectively bridge the acoustic-semantic gap within the feature representation space. To address this challenge, we propose CORD, a unified alignment framework that performs online cross-modal self-distillation. Specifically, it aligns audio-conditioned reasoning with its text-conditioned counterpart within a unified model. Leveraging the text modality as an internal teacher, CORD performs multi-granularity alignment throughout the audio rollout process. At the token level, it employs on-policy reverse KL divergence with importance-aware weighting to prioritize early and semantically critical tokens. At the sequence level, CORD introduces a judge-based global reward to optimize complete reasoning trajectories via Group Relative Policy Optimization (GRPO). Empirical results across multiple benchmarks demonstrate that CORD consistently enhances audio-conditioned reasoning and substantially bridges the audio-text performance gap with only 80k synthetic training samples, validating the efficacy and data efficiency of our on-policy, multi-level cross-modal alignment approach.
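The two alignment levels described above can be illustrated with a minimal sketch. The function names, the exponential position decay used as the "importance-aware" weight, and the toy group-normalized advantage are all illustrative assumptions, not the paper's actual implementation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def weighted_reverse_kl(student_logits, teacher_logits, decay=0.9):
    """Token-level reverse KL D_KL(student || teacher) between the
    audio-conditioned student and text-conditioned teacher distributions,
    with a hypothetical exponential position decay so that earlier tokens
    contribute more (standing in for the paper's importance-aware weights)."""
    total, weight_sum = 0.0, 0.0
    for t, (s_l, t_l) in enumerate(zip(student_logits, teacher_logits)):
        p = softmax(s_l)  # audio-conditioned (student) token distribution
        q = softmax(t_l)  # text-conditioned (teacher) token distribution
        kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
        w = decay ** t    # earlier tokens receive larger weights
        total += w * kl
        weight_sum += w
    return total / weight_sum

def grpo_advantages(rewards):
    """Sequence-level group-relative advantages as in GRPO: each rollout's
    judge reward is normalized by the mean and std of its rollout group."""
    mean = sum(rewards) / len(rewards)
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / len(rewards))
    return [(r - mean) / (std + 1e-8) for r in rewards]
```

In this sketch, identical student and teacher logits yield zero divergence, and the group-relative advantages of any rollout group sum to zero by construction.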

Jing Hu, Danxiang Zhu, Xianlong Luo, Dan Zhang, Shuwei He, Yishu Lei, Haitao Zheng, Shikun Feng, Jingzhou He, Yu Sun, Hua Wu, Haifeng Wang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Audio understanding | MMAU (test) | Speech score | 55.42 | 25 |
| Audio-conditioned reasoning | MMSU | Accuracy | 57.63 | 8 |
| Audio-conditioned reasoning | OBQA | Accuracy | 77.74 | 8 |
| Audio-conditioned reasoning | GSM8K | Accuracy | 47.56 | 8 |
