FireRedASR2S: A State-of-the-Art Industrial-Grade All-in-One Automatic Speech Recognition System
About
We present FireRedASR2S, a state-of-the-art industrial-grade all-in-one automatic speech recognition (ASR) system. It integrates four modules in a unified pipeline: ASR, Voice Activity Detection (VAD), Spoken Language Identification (LID), and Punctuation Prediction (Punc). All modules achieve SOTA performance on the evaluated benchmarks:

- **FireRedASR2**: An ASR module with two variants, FireRedASR2-LLM (8B+ parameters) and FireRedASR2-AED (1B+ parameters), supporting speech and singing transcription for Mandarin, Chinese dialects and accents, English, and code-switching. Compared to FireRedASR, FireRedASR2 delivers higher recognition accuracy and broader dialect and accent coverage. FireRedASR2-LLM achieves a 2.89% average CER on 4 public Mandarin benchmarks and 11.55% on 19 public Chinese dialect and accent benchmarks, outperforming competitive baselines including Doubao-ASR, Qwen3-ASR, and Fun-ASR.
- **FireRedVAD**: An ultra-lightweight module (0.6M parameters) based on the Deep Feedforward Sequential Memory Network (DFSMN), supporting streaming VAD, non-streaming VAD, and multi-label VAD (mVAD). On the FLEURS-VAD-102 benchmark, it achieves 97.57% frame-level F1 and 99.60% AUC-ROC, outperforming Silero-VAD, TEN-VAD, FunASR-VAD, and WebRTC-VAD.
- **FireRedLID**: An encoder-decoder LID module supporting 100+ languages and 20+ Chinese dialects and accents. On FLEURS (82 languages), it achieves 97.18% utterance-level accuracy, outperforming Whisper and SpeechBrain.
- **FireRedPunc**: A BERT-style punctuation prediction module for Chinese and English. On multi-domain benchmarks, it achieves a 78.90% average F1, outperforming FunASR-Punc (62.77%).

To advance research in speech processing, we release model weights and code at https://github.com/FireRedTeam/FireRedASR2S.
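FireRedVAD itself is a DFSMN neural model whose API is not shown here. To illustrate what a frame-level VAD stage produces and how its output typically feeds the rest of the pipeline, the sketch below uses a deliberately simple energy-threshold stand-in: it labels each frame speech/non-speech, then merges speech frames into time segments while bridging short gaps (a common VAD post-processing step). All function names and parameters are illustrative, not the FireRedASR2S API.

```python
import numpy as np

def frame_vad(samples, sample_rate=16000, frame_ms=25, hop_ms=10, threshold=0.01):
    """Label each frame 1 (speech) or 0 (non-speech) by RMS energy.

    NOTE: a toy energy-threshold stand-in, NOT FireRedVAD's DFSMN model.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    hop = int(sample_rate * hop_ms / 1000)
    labels = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        rms = float(np.sqrt(np.mean(frame ** 2)))
        labels.append(1 if rms > threshold else 0)
    return labels

def frames_to_segments(labels, hop_ms=10, min_gap_frames=3):
    """Merge consecutive speech frames into (start_ms, end_ms) segments,
    bridging non-speech gaps shorter than min_gap_frames."""
    segments, start, gap = [], None, 0
    for i, lab in enumerate(labels):
        if lab:
            if start is None:
                start = i
            gap = 0
        elif start is not None:
            gap += 1
            if gap > min_gap_frames:
                # Last speech frame was at i - gap; close the segment there.
                segments.append((start * hop_ms, (i - gap + 1) * hop_ms))
                start, gap = None, 0
    if start is not None:
        segments.append((start * hop_ms, len(labels) * hop_ms))
    return segments

# Synthetic 1 s clip: silence, then a 440 Hz tone from 0.25 s to 0.75 s.
sr = 16000
t = np.arange(sr) / sr
audio = np.zeros(sr)
audio[4000:12000] = 0.1 * np.sin(2 * np.pi * 440 * t[4000:12000])

labels = frame_vad(audio)
segs = frames_to_segments(labels)
print(segs)  # one segment roughly covering 250-750 ms
```

In a full pipeline, each returned segment would then be routed through LID and ASR, with punctuation restored on the transcript afterwards.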
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Automatic Speech Recognition | AISHELL-1 (test) | CER (%) | 0.57 | 97 |
| Language Identification | FLEURS (test) | Accuracy (%) | 97.18 | 6 |
| Language Identification | CommonVoice (test) | Accuracy (%) | 92.07 | 6 |
| Automatic Speech Recognition | 24 public ASR benchmarks, Avg-All-24 (test) | CER (%) | 9.67 | 5 |
| Automatic Speech Recognition | Avg-Mandarin-4 (test) | CER (%) | 2.89 | 5 |
| Automatic Speech Recognition | 19 Chinese dialect benchmarks, Avg-Dialect-19 (test) | CER (%) | 11.55 | 5 |
| Automatic Speech Recognition | WenetSpeech Internet domain (ws-net) | CER (%) | 4.44 | 5 |
| Automatic Speech Recognition | WenetSpeech Meeting domain (ws-meeting) | CER (%) | 4.32 | 5 |
| Singing Lyrics Recognition | Opencpop (Sing-1) | CER (%) | 1.12 | 5 |
| Voice Activity Detection | FLEURS-VAD-102 (test) | F1 (%) | 97.57 | 5 |
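The CER figures above follow the standard definition: the character-level Levenshtein (edit) distance between hypothesis and reference, divided by the reference length. The snippet below is generic metric code for reference, not the FireRed evaluation script:

```python
def cer(ref: str, hyp: str) -> float:
    """Character Error Rate: edit distance between reference and hypothesis
    character sequences, normalized by reference length.

    Standard metric definition; not FireRedASR2S's evaluation code.
    """
    r, h = list(ref), list(hyp)
    # Rolling dynamic-programming rows of the Levenshtein distance table.
    prev = list(range(len(h) + 1))
    for i in range(1, len(r) + 1):
        cur = [i] + [0] * len(h)
        for j in range(1, len(h) + 1):
            sub = prev[j - 1] + (r[i - 1] != h[j - 1])  # substitution (or match)
            cur[j] = min(prev[j] + 1,       # deletion
                         cur[j - 1] + 1,    # insertion
                         sub)
        prev = cur
    return prev[len(h)] / max(len(r), 1)

# One substituted character out of six -> 1/6, i.e. 16.67% CER.
print(f"{cer('今天天气很好', '今天天汽很好') * 100:.2f}%")  # 16.67%
```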