
Beyond Transcription: Unified Audio Schema for Perception-Aware AudioLLMs

About

Recent Audio Large Language Models (AudioLLMs) exhibit a striking performance inversion: while excelling at complex reasoning tasks, they consistently underperform on fine-grained acoustic perception. We attribute this gap to a fundamental limitation of ASR-centric training, which provides precise linguistic targets but implicitly teaches models to suppress paralinguistic cues and acoustic events as noise. To address this, we propose Unified Audio Schema (UAS), a holistic and structured supervision framework that organizes audio information into three explicit components -- Transcription, Paralinguistics, and Non-linguistic Events -- within a unified JSON format. This design achieves comprehensive acoustic coverage without sacrificing the tight audio-text alignment that enables reasoning. We validate the effectiveness of this supervision strategy by applying it to both discrete and continuous AudioLLM architectures. Extensive experiments on MMSU, MMAR, and MMAU demonstrate that UAS-Audio yields consistent improvements, boosting fine-grained perception by 10.9% on MMSU over the same-size state-of-the-art models while preserving robust reasoning capabilities. Our code and model are publicly available at https://github.com/Tencent/Unified_Audio_Schema.
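The abstract describes organizing each clip's supervision into three explicit components inside one unified JSON object. A minimal sketch of what such a record might look like, assuming plausible sub-fields (only the three top-level components come from the abstract; all inner field names and values are illustrative assumptions, not the paper's actual schema):

```python
import json

# Hypothetical UAS-style supervision record. The three top-level components
# (transcription, paralinguistics, non-linguistic events) are named in the
# abstract; every sub-field below is an assumed example, not the real schema.
uas_record = {
    "transcription": "Sure, I can help with that.",
    "paralinguistics": {                      # assumed sub-fields
        "emotion": "friendly",
        "speaker_gender": "female",
        "speaking_rate": "normal",
    },
    "non_linguistic_events": [                # assumed event layout
        {"event": "keyboard_typing", "start_sec": 0.4, "end_sec": 1.2},
    ],
}

# Serialize to the unified JSON format used as the training target.
serialized = json.dumps(uas_record, indent=2)
print(serialized)
```

Training on a single structured target like this, rather than a bare transcript, is what lets the model keep tight audio-text alignment while still being rewarded for noticing paralinguistic cues and acoustic events.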

Linhao Zhang, Yuhan Song, Aiwei Liu, Chuhan Wu, Sijun Zhang, Wei Jia, Yuan Liu, Houfeng Wang, Xiao Zhou• 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
--- | --- | --- | --- | ---
Audio Understanding | MMSU | Perception Score | 55.7 | 32
Audio Understanding | MMAR (comprehensive evaluation) | Sound Score | 58.8 | 25
Text-to-Speech | Seed-TTS EN | WER | 1.7 | 20
Text-to-Speech | Seed-TTS ZH | WER | 1.4 | 12
Audio Understanding | MMAU | Speech Score | 67 | 6
