Distilling an End-to-End Voice Assistant Without Instruction Training Data

About

Voice assistants, such as Siri and Google Assistant, typically model audio and text separately, resulting in lost speech information and increased complexity. Recent efforts to address this with end-to-end Speech Large Language Models (LLMs) trained with supervised finetuning (SFT) have led to models "forgetting" capabilities from text-only LLMs. Our work proposes an alternative paradigm for training Speech LLMs without instruction data, using the responses of a text-only LLM to transcripts as self-supervision. Importantly, this process can be performed without annotated responses. We show that our Distilled Voice Assistant (DiVA) generalizes to Spoken Question Answering, Classification, and Translation. Furthermore, we show that DiVA better meets user preferences, achieving a 72% win rate compared with state-of-the-art models like Qwen 2 Audio, despite using >100x less training compute.

William Held, Ella Li, Michael Ryan, Weiyan Shi, Yanzhe Zhang, Diyi Yang • 2024
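
The core idea lends itself to a compact illustration. Below is a minimal PyTorch sketch of the distillation setup described in the abstract, assuming paired (audio, transcript) examples. The module names (TextLLM, SpeechEncoder), shapes, and the single KL objective are illustrative stand-ins rather than the authors' actual implementation: a frozen text-only LLM's output distribution over the transcript supervises the same backbone reading audio embeddings, so no annotated responses are needed.

```python
# Minimal sketch of cross-modal distillation without instruction data.
# All module names and dimensions here are hypothetical stand-ins,
# not the DiVA authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, D = 32000, 512  # hypothetical vocabulary and hidden sizes

class TextLLM(nn.Module):
    """Stand-in for a frozen text-only LLM (the teacher)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D)
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(D, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.lm_head = nn.Linear(D, VOCAB)

    def forward(self, inputs_embeds):
        return self.lm_head(self.backbone(inputs_embeds))

class SpeechEncoder(nn.Module):
    """Stand-in for an audio encoder that maps speech features
    into the LLM's embedding space (the trainable student part)."""
    def __init__(self, audio_dim=80):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(audio_dim, D), nn.GELU(), nn.Linear(D, D)
        )

    def forward(self, audio_feats):
        return self.proj(audio_feats)

def distill_step(llm, speech_enc, audio_feats, transcript_ids):
    """One training step: the frozen LLM's next-token distribution over
    the transcript supervises the same LLM reading audio embeddings."""
    with torch.no_grad():  # teacher: text-only LLM on the transcript
        teacher_logits = llm(llm.embed(transcript_ids))
    # Student: the same frozen LLM backbone, fed audio embeddings.
    audio_embeds = speech_enc(audio_feats)
    student_logits = llm(audio_embeds)
    # Truncate to a common length for illustration only; a real system
    # must align audio frames to text positions (e.g. by downsampling).
    T = min(teacher_logits.size(1), student_logits.size(1))
    # KL divergence pushes the audio-conditioned distribution toward
    # the text-conditioned one -- no annotated responses required.
    return F.kl_div(
        F.log_softmax(student_logits[:, :T], dim=-1),
        F.softmax(teacher_logits[:, :T], dim=-1),
        reduction="batchmean",
    )

# Toy usage: a batch of 2 utterances, 100 audio frames / 10 text tokens.
llm, speech_enc = TextLLM(), SpeechEncoder()
for p in llm.parameters():
    p.requires_grad_(False)  # only the audio encoder is trained
loss = distill_step(llm, speech_enc,
                    torch.randn(2, 100, 80),
                    torch.randint(0, VOCAB, (2, 10)))
loss.backward()
```

The truncation in the sketch is only a placeholder: in practice, the length mismatch between audio frames and text tokens has to be handled by an explicit alignment or downsampling step before the distributions can be compared.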

Related benchmarks

Task | Dataset | Metric | Result | Rank
Commonsense Reasoning | HellaSwag | Accuracy | 67.8 | 1891
Question Answering | ARC Challenge | Accuracy | 81.7 | 906
Physical Commonsense Reasoning | PIQA | Accuracy | 70 | 572
Story Completion | StoryCloze | Accuracy | 68.6 | 73
Commonsense Reasoning | PIQA | Accuracy | 80.8 | 71
Commonsense Reasoning | StoryCloze | Accuracy | 80.9 | 34
Science Question Answering | ARC-C | Accuracy | 45.9 | 32
General Audio Understanding | VoiceBench | AlpacaEval Score | 3.67 | 19
Multi-task Knowledge | MMSU | Accuracy | 36.1 | 11
OpenBook Question Answering | OBQA | Accuracy | 0.409 | 11

(Showing 10 of 16 rows.)
