
MiDashengLM: Efficient Audio Understanding with General Audio Captions

About

Current approaches for large audio language models (LALMs) often rely on closed data sources or proprietary models, limiting their generalization and accessibility. This paper introduces MiDashengLM, a novel open audio-language model designed for efficient and comprehensive audio understanding, trained with general audio captions from our novel ACAVCaps dataset. MiDashengLM relies exclusively on publicly available pretraining and supervised fine-tuning (SFT) datasets, ensuring full transparency and reproducibility. At its core, MiDashengLM integrates Dasheng, an open-source audio encoder engineered to process diverse auditory information effectively. Unlike previous works that rely primarily on Automatic Speech Recognition (ASR) for audio-text alignment, our strategy centers on general audio captions, fusing speech, sound, and music information into a single holistic textual representation of complex audio scenes. Finally, MiDashengLM achieves up to a 4x speedup in time-to-first-token (TTFT) and up to 20x higher throughput than comparable models. Checkpoints are available at https://huggingface.co/mispeech/midashenglm-7b and https://github.com/xiaomi-research/dasheng-lm.
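The released checkpoint can be loaded through Hugging Face Transformers. The sketch below is an assumption about the usage pattern, not the paper's documented interface: the class names, the `audios=`/`text=` processor arguments, and the prompt string are guesses, and the audio path is a placeholder. Consult the model card at https://huggingface.co/mispeech/midashenglm-7b for the exact API.

```python
# Hypothetical sketch: loading MiDashengLM-7B via transformers.
# Class names and call signatures are assumptions -- check the model card.
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "mispeech/midashenglm-7b"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Caption an audio clip (path is a placeholder).
inputs = processor(text="Describe this audio.", audios=["example.wav"],
                   return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```

Note that `trust_remote_code=True` executes model code shipped with the checkpoint, which is the usual pattern for custom multimodal architectures on the Hub.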

Heinrich Dinkel, Gang Li, Jizhong Liu, Jian Luan, Yadong Niu, Xingwei Sun, Tianzi Wang, Qiyang Xiao, Junbo Zhang, Jiahao Zhou • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Speech Recognition | LongSpeech | WER | 35.5 | 8 |
| Temporal Issue Localization | LongSpeech | St.A | 0.48 | 5 |
| Speaker Count | LongSpeech | Speaker Count Metric (N.A.) | 35.31 | 5 |
| Content Separation | LongSpeech | N.A Score | 23.75 | 5 |
| Emotion Analysis | LongSpeech | St.A | 11.08 | 5 |
| Summary | LongSpeech | ROUGE-1 | 15.22 | 5 |
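The speech-recognition row above is scored by word error rate (WER): the word-level Levenshtein distance between hypothesis and reference, normalized by the reference length. A minimal self-contained sketch (the `wer` helper is ours, not from the paper or benchmark code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate in percent: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = min edits to turn the first i reference words
    # into the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return 100.0 * dp[len(ref)][len(hyp)] / len(ref)

print(wer("a b c d", "a x c"))  # one substitution + one deletion over 4 words
```

For example, a hypothesis with one substitution and one deletion against a four-word reference scores 50.0.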
