
Speculate, then Collaborate: Fusing Knowledge of Language Models during Decoding

About

Large Language Models (LLMs) often excel in specific domains but fall short in others due to the limitations of their training. Thus, enabling LLMs to solve problems collaboratively by integrating their complementary knowledge promises to improve their performance across domains. To realize this potential, we introduce a novel Collaborative Speculative Decoding (CoSD) algorithm that enables efficient LLM knowledge fusion at test time without requiring additional model training. CoSD employs a draft model to generate initial sequences and an easy-to-learn rule or decision tree to decide when to invoke an assistant model to improve these drafts. CoSD not only enhances knowledge fusion but also improves inference efficiency, is transferable across domains and models, and offers greater explainability. Experimental results demonstrate that CoSD improves accuracy by up to 10% across benchmarks compared to existing methods, providing a scalable and effective solution for LLM-based applications.
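The decoding loop described in the abstract can be sketched as follows. This is an illustrative Python toy, not the authors' implementation: the stand-in `draft_model`, `assistant_model`, and the probability-threshold rule (a simple proxy for the paper's learned rule or decision tree) are all assumptions for demonstration.

```python
# Hypothetical sketch of a CoSD-style loop: a draft model proposes each
# next token with a confidence score, and a simple threshold rule decides
# when to defer to an assistant model (standing in for the paper's
# learned rule/decision tree).

def draft_model(context):
    # Toy stand-in: confident early in the sequence, uncertain later.
    return ("draft_tok", 0.9) if len(context) < 3 else ("draft_tok", 0.2)

def assistant_model(context):
    # Toy stand-in for the model that is stronger on this domain.
    return "assist_tok"

def cosd_decode(max_tokens=5, threshold=0.5):
    context = []
    for _ in range(max_tokens):
        token, prob = draft_model(context)
        if prob < threshold:  # rule fires: replace the draft token
            token = assistant_model(context)
        context.append(token)
    return context
```

In this sketch the draft model's first three tokens are kept, and once its confidence drops below the threshold, the assistant model's tokens are substituted, mirroring the draft-then-verify division of labor the abstract describes.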

Ziyao Wang, Muneeza Azmat, Ang Li, Raya Horesh, Mikhail Yurochkin • 2025

Related benchmarks

Task: Commonsense Reasoning
Dataset: Com^2-hard Intervention (test)
Result: Accuracy 8.3
Rank: 5
