
Moshi: a speech-text foundation model for real-time dialogue

About

We introduce Moshi, a speech-text foundation model and full-duplex spoken dialogue framework. Current systems for spoken dialogue rely on pipelines of independent components, namely voice activity detection, speech recognition, textual dialogue and text-to-speech. Such frameworks cannot emulate the experience of real conversations. First, their complexity induces a latency of several seconds between interactions. Second, with text as the intermediate modality for dialogue, non-linguistic information that modifies meaning -- such as emotion or non-speech sounds -- is lost in the interaction. Finally, they rely on a segmentation into speaker turns, which does not account for overlapping speech, interruptions and interjections. Moshi solves these independent issues altogether by casting spoken dialogue as speech-to-speech generation. Starting from a text language model backbone, Moshi generates speech as tokens from the residual quantizer of a neural audio codec, while separately modeling its own speech and that of the user as parallel streams. This removes the need for explicit speaker turns and allows the modeling of arbitrary conversational dynamics. We moreover extend the hierarchical semantic-to-acoustic token generation of previous work to first predict time-aligned text tokens as a prefix to audio tokens. Not only does this "Inner Monologue" method significantly improve the linguistic quality of generated speech, but we also illustrate how it can provide streaming speech recognition and text-to-speech. Our resulting model is the first real-time full-duplex spoken large language model, with a theoretical latency of 160ms (200ms in practice), and is available at https://github.com/kyutai-labs/moshi.
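To make the multi-stream design concrete, here is a minimal, illustrative sketch (not the official implementation) of how per-step tokens could be arranged: at each step, a time-aligned text token (the "Inner Monologue" prefix) is emitted before the codec tokens for Moshi's own speech and for the user's stream, so audio generation is conditioned on the text. All names (`Frame`, `flatten`) and token values are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    """One time step of the hypothetical multi-stream sequence."""
    text: int                 # time-aligned text token (Inner Monologue prefix)
    moshi_audio: List[int]    # residual-quantizer tokens for Moshi's own speech
    user_audio: List[int]     # residual-quantizer tokens modeling the user's speech

def flatten(frames: List[Frame]) -> List[int]:
    """Flatten frames into one autoregressive sequence.

    Within each step, the text token comes first, so the audio tokens
    that follow are predicted conditioned on it -- the core idea behind
    the Inner Monologue prefix.
    """
    seq: List[int] = []
    for f in frames:
        seq.extend([f.text, *f.moshi_audio, *f.user_audio])
    return seq

frames = [
    Frame(text=101, moshi_audio=[7, 8], user_audio=[3, 4]),
    Frame(text=102, moshi_audio=[9, 2], user_audio=[5, 6]),
]
print(flatten(frames))  # -> [101, 7, 8, 3, 4, 102, 9, 2, 5, 6]
```

Because both streams advance in lockstep every step, there is no explicit turn-taking: overlapping speech simply shows up as simultaneous non-silence in both audio streams.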

Alexandre Défossez, Laurent Mazaré, Manu Orsini, Amélie Royer, Patrick Pérez, Hervé Jégou, Edouard Grave, Neil Zeghidour • 2024

Related benchmarks

Task                          Dataset                    Metric        Result   Rank
Automatic Speech Recognition  LibriSpeech clean (test)   WER           5.7      1156
Code Generation               HumanEval                  --            --       1036
Language Understanding        MMLU                       Accuracy      49.8     825
Mathematical Reasoning        GSM8K                      Accuracy      40.3     358
General Knowledge             MMLU                       Accuracy      49.8     234
Question Answering            TriviaQA                   Accuracy      48.5     112
Automatic Speech Recognition  LibriSpeech Other          WER           12       96
Automatic Speech Recognition  LibriSpeech Clean          WER           5.5      80
Text-to-Speech                LibriSpeech clean (test)   WER           4.7      66
Audio Reconstruction          AudioSet (eval)            Mel Distance  0.8406   63

Showing 10 of 156 rows
...
