
FireRedASR2S: A State-of-the-Art Industrial-Grade All-in-One Automatic Speech Recognition System

About

We present FireRedASR2S, a state-of-the-art industrial-grade all-in-one automatic speech recognition (ASR) system. It integrates four modules in a unified pipeline: ASR, Voice Activity Detection (VAD), Spoken Language Identification (LID), and Punctuation Prediction (Punc). All modules achieve SOTA performance on the evaluated benchmarks:

- FireRedASR2: An ASR module with two variants, FireRedASR2-LLM (8B+ parameters) and FireRedASR2-AED (1B+ parameters), supporting speech and singing transcription for Mandarin, Chinese dialects and accents, English, and code-switching. Compared to FireRedASR, FireRedASR2 delivers improved recognition accuracy and broader dialect and accent coverage. FireRedASR2-LLM achieves 2.89% average CER on 4 public Mandarin benchmarks and 11.55% on 19 public Chinese dialect and accent benchmarks, outperforming competitive baselines including Doubao-ASR, Qwen3-ASR, and Fun-ASR.
- FireRedVAD: An ultra-lightweight module (0.6M parameters) based on the Deep Feedforward Sequential Memory Network (DFSMN), supporting streaming VAD, non-streaming VAD, and multi-label VAD (mVAD). On the FLEURS-VAD-102 benchmark, it achieves 97.57% frame-level F1 and 99.60% AUC-ROC, outperforming Silero-VAD, TEN-VAD, FunASR-VAD, and WebRTC-VAD.
- FireRedLID: An encoder-decoder LID module supporting 100+ languages and 20+ Chinese dialects and accents. On FLEURS (82 languages), it achieves 97.18% utterance-level accuracy, outperforming Whisper and SpeechBrain.
- FireRedPunc: A BERT-style punctuation prediction module for Chinese and English. On multi-domain benchmarks, it achieves 78.90% average F1, outperforming FunASR-Punc (62.77%).

To advance research in speech processing, we release model weights and code at https://github.com/FireRedTeam/FireRedASR2S.
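To make the module composition concrete, here is a minimal sketch of how the four stages could chain together: VAD segments the audio, LID labels each segment, ASR transcribes it, and Punc restores punctuation. All class names, signatures, and the stubbed behaviors below are hypothetical illustrations; the actual FireRedASR2S API in the repository may differ.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float      # seconds
    end: float        # seconds
    text: str = ""
    language: str = ""

# Hypothetical stand-ins for the four FireRedASR2S modules (stubbed logic).
class VAD:
    def __call__(self, audio, sr=16000):
        # Stub: treat the whole clip as one speech region.
        return [(0.0, len(audio) / sr)]

class LID:
    def __call__(self, audio_slice):
        return "zh"  # stub language label

class ASR:
    def __call__(self, audio_slice, language):
        return "你好世界"  # stub transcript

class Punc:
    def __call__(self, text):
        return text + "。"  # stub punctuation restoration

def transcribe(audio, sr=16000):
    """Run the four-stage pipeline: VAD -> LID -> ASR -> Punc."""
    vad, lid, asr, punc = VAD(), LID(), ASR(), Punc()
    segments = []
    for start, end in vad(audio, sr):
        sl = audio[int(start * sr):int(end * sr)]
        lang = lid(sl)
        text = punc(asr(sl, language=lang))
        segments.append(Segment(start, end, text, lang))
    return segments
```

The design point illustrated here is that VAD gates the downstream modules: LID, ASR, and Punc only ever see speech regions, which is what makes a lightweight VAD front-end (0.6M parameters) valuable in an industrial pipeline.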

Kaituo Xu, Yan Jia, Kai Huang, Junjie Chen, Wenpeng Li, Kun Liu, Feng-Long Xie, Xu Tang, Yao Hu • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Automatic Speech Recognition | AISHELL-1 (test) | CER (%) | 0.57 | 97 |
| Language Identification | FLEURS (test) | Accuracy (%) | 97.18 | 6 |
| Language Identification | CommonVoice (test) | Accuracy (%) | 92.07 | 6 |
| Automatic Speech Recognition | 24 public ASR Avg-All-24 (test) | CER (%) | 9.67 | 5 |
| Automatic Speech Recognition | Avg-Mandarin-4 (test) | CER (%) | 2.89 | 5 |
| Automatic Speech Recognition | 19 Chinese Dialect ASR Avg-Dialect-19 (test) | CER (%) | 11.55 | 5 |
| Automatic Speech Recognition | WenetSpeech Internet domain (ws-net) | CER (%) | 4.44 | 5 |
| Automatic Speech Recognition | WenetSpeech Meeting domain (ws-meeting) | CER (%) | 4.32 | 5 |
| Singing Lyrics Recognition | Opencpop Sing-1 | CER (%) | 1.12 | 5 |
| Voice Activity Detection | FLEURS-VAD-102 (test) | F1 (%) | 97.57 | 5 |

Showing 10 of 13 rows.
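Most rows above report character error rate (CER), the standard ASR metric: character-level edit distance between hypothesis and reference, divided by reference length. The sketch below shows the textbook definition; it is not the repository's evaluation script, which may apply additional text normalization before scoring.

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: char-level edit distance / reference length."""
    ref, hyp = list(reference), list(hypothesis)
    # Dynamic-programming Levenshtein distance over characters.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, start=1):
            curr[j] = min(prev[j] + 1,            # deletion
                          curr[j - 1] + 1,        # insertion
                          prev[j - 1] + (r != h)) # substitution (0 if match)
        prev = curr
    return prev[-1] / len(ref)
```

For example, one substituted character out of a four-character reference gives a CER of 0.25, i.e. 25%; the table's 0.57 on AISHELL-1 means roughly 1 character error per 175 reference characters.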
