
When to Ensemble: Identifying Token-Level Points for Stable and Fast LLM Ensembling

About

Ensembling Large Language Models (LLMs) has gained attention as a promising approach to surpass the performance of individual models by leveraging their complementary strengths. In particular, aggregating models' next-token probability distributions to select the next token has been shown to be effective in various tasks. However, while successful for short-form answers, its application to long-form generation remains underexplored. In this paper, we show that using existing ensemble methods in long-form generation requires a careful choice of ensembling positions, since the standard practice of ensembling at every token often degrades performance. We identify two key factors for determining the ensembling positions: tokenization mismatch across models and consensus in their next-token probability distributions. Based on this, we propose SAFE (Stable And Fast LLM Ensembling), a framework that selectively ensembles by jointly considering these factors. To further improve stability, we apply a probability sharpening strategy when the ensemble distribution becomes overly smooth, enabling the selection of more confident tokens during ensembling. Our experiments on diverse benchmarks, including MATH500 and BBH, demonstrate that SAFE outperforms existing methods in both accuracy and efficiency, with gains achieved even when ensembling fewer than 1% of tokens.

Heecheol Yun, Kwangmin Ki, Junghyun Lee, Eunho Yang • 2025
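
To make the selection rule concrete, below is a minimal sketch of token-level selective ensembling in the spirit of the abstract, assuming two models that share a vocabulary. The thresholds, the total-variation consensus proxy, and the temperature-based sharpening are illustrative placeholders, not values or formulas from the paper.

```python
# Sketch of selective token-level ensembling: ensemble only when tokenizations
# align and the models agree, and sharpen the ensemble distribution when it is
# overly smooth. All constants below are hypothetical, not from the paper.
import numpy as np

CONSENSUS_THRESHOLD = 0.8   # hypothetical: minimum probability-mass overlap
ENTROPY_THRESHOLD = 2.0     # hypothetical: above this, distribution is "too smooth"
SHARPEN_TEMPERATURE = 0.5   # hypothetical: temperature < 1 sharpens the distribution


def sharpen(probs: np.ndarray, temperature: float) -> np.ndarray:
    """Sharpen a distribution by exponentiating and renormalizing."""
    p = probs ** (1.0 / temperature)
    return p / p.sum()


def entropy(probs: np.ndarray) -> float:
    """Shannon entropy, ignoring zero-probability entries."""
    p = probs[probs > 0]
    return float(-(p * np.log(p)).sum())


def select_next_token(dist_a: np.ndarray, dist_b: np.ndarray,
                      tokens_aligned: bool) -> int:
    """Pick the next token, ensembling only at qualifying positions."""
    if not tokens_aligned:
        # Tokenization mismatch: skip ensembling, keep the base model's token.
        return int(dist_a.argmax())
    # Consensus proxy: probability-mass overlap (1 - total variation distance).
    overlap = float(np.minimum(dist_a, dist_b).sum())
    if overlap < CONSENSUS_THRESHOLD:
        # Low consensus: skip ensembling at this position as well.
        return int(dist_a.argmax())
    ensembled = 0.5 * (dist_a + dist_b)  # average the next-token distributions
    if entropy(ensembled) > ENTROPY_THRESHOLD:
        # Overly smooth ensemble distribution: sharpen before selection.
        ensembled = sharpen(ensembled, SHARPEN_TEMPERATURE)
    return int(ensembled.argmax())


# Example: two toy distributions over a 5-token vocabulary.
a = np.array([0.5, 0.2, 0.1, 0.1, 0.1])
b = np.array([0.4, 0.3, 0.1, 0.1, 0.1])
print(select_next_token(a, b, tokens_aligned=True))  # -> 0
```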

Related benchmarks

Task                                  Dataset             Result             Rank
Mathematical Reasoning                GSM8K               Accuracy: 86.66    1362
Question Answering                    MMLU-Redux          Accuracy: 69.99    48
Multiple-choice Question Answering    MMLU Redux (test)   Accuracy: 83.79    13
Multiple-choice Question Answering    MMLU-Redux          Accuracy: 85.11    4
