
Peeking Into The Future For Contextual Biasing

About

While end-to-end (E2E) automatic speech recognition (ASR) models excel at general transcription, they struggle to recognize rare or unseen named entities (e.g., contact names, locations), which are critical for downstream applications like virtual assistants. In this paper, we propose a contextual biasing method for attention-based encoder-decoder (AED) models using a list of candidate named entities. Instead of predicting only the next token, we simultaneously predict multiple future tokens, enabling the model to "peek into the future" and score potential candidate entities in the entity list. Moreover, our approach leverages the multi-token prediction logits directly without requiring additional entity encoders or cross-attention layers, significantly reducing architectural complexity. Experiments on LibriSpeech demonstrate that our approach achieves up to 50.34% relative improvement in named entity word error rate compared to the baseline AED model.
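The core scoring idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the decoder emits logit vectors for the next K future token positions at once, and scores each candidate entity (a token-id sequence) by summing the log-probabilities of its tokens at the corresponding positions. All names, shapes, and the toy vocabulary are illustrative assumptions.

```python
import math

def log_softmax(logits):
    # Numerically stable log-softmax over a single logit vector.
    m = max(logits)
    z = math.log(sum(math.exp(x - m) for x in logits)) + m
    return [x - z for x in logits]

def score_entity(future_logits, entity_tokens):
    """Score a candidate entity against multi-token predictions.

    future_logits: list of K vocab-sized logit vectors, one per future position.
    entity_tokens: token ids of the candidate entity (len <= K).
    Returns the summed log-probability of the entity's tokens.
    """
    score = 0.0
    for pos, tok in enumerate(entity_tokens):
        score += log_softmax(future_logits[pos])[tok]
    return score

# Toy example: vocabulary of 4 tokens, K = 2 future positions predicted jointly.
future_logits = [
    [2.0, 0.1, 0.1, 0.1],  # position t+1 favors token 0
    [0.1, 2.0, 0.1, 0.1],  # position t+2 favors token 1
]
candidates = {"entity_a": [0, 1], "entity_b": [2, 3]}
scores = {name: score_entity(future_logits, toks)
          for name, toks in candidates.items()}
best = max(scores, key=scores.get)
```

In this toy setup the entity whose token sequence aligns with the peeked-ahead predictions ("entity_a") receives the higher score, which is the signal a biasing mechanism could then use to favor that entity during decoding.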

Ramaneswaran Selvakumar, Cindy Tseng, Eesung Kim, Vijendra Raj Apsingekar, Yun Tang • 2025

Related benchmarks

Task                          | Dataset                        | Result    | Rank
Automatic Speech Recognition  | LibriSpeech 960h (test-other)  | WER 5.63  | 81
Speech Recognition            | LibriSpeech 960 clean (test)   | WER 2.27  | 17
