AutoMix: Automatically Mixing Language Models
About
Large language models (LLMs) are now available from cloud API providers in various sizes and configurations. While this diversity offers a broad spectrum of choices, effectively leveraging the options to optimize computational cost and performance remains challenging. In this work, we present AutoMix, an approach that strategically routes queries to larger LMs based on the approximate correctness of outputs from a smaller LM. Central to AutoMix are two key technical contributions. First, a few-shot self-verification mechanism estimates the reliability of the smaller LM's own outputs without requiring extensive training. Second, because self-verification can be noisy, a POMDP-based router selects an appropriately sized model based on answer confidence. Experiments across five language models and five challenging datasets show that AutoMix consistently surpasses strong baselines, reducing computational cost by over 50% at comparable performance.
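The core routing idea can be illustrated with a minimal sketch: answer with the small model first, score that answer with a self-verifier, and escalate to the large model only when confidence is low. The function and parameter names below (`route`, `verify`, `threshold`) are hypothetical placeholders, and a fixed threshold is a simplification; AutoMix's actual router solves a POMDP over noisy verifier signals rather than thresholding.

```python
def route(query, small_lm, large_lm, verify, threshold=0.7):
    """Answer with the small model; escalate to the large model
    when few-shot self-verification confidence is weak.

    Note: this threshold rule is a simplified stand-in for the
    paper's POMDP-based routing policy.
    """
    answer = small_lm(query)
    confidence = verify(query, answer)  # self-verification score in [0, 1]
    if confidence >= threshold:
        return answer, "small"
    return large_lm(query), "large"


# Toy usage with stub models (placeholders, not real LLM calls).
small = lambda q: "draft answer"
large = lambda q: "refined answer"
low_confidence_checker = lambda q, a: 0.4  # below threshold, so escalate
print(route("What is AutoMix?", small, large, low_confidence_checker))
```

In the full system, the escalation decision also weighs query cost and the verifier's historical noise, which is what the POMDP formulation captures.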
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Conversational Question Answering | CoQA | AIBC | 86.5 | 12 |
| Dialogue Reasoning | DIPLOMAT | AIBC Score | 168.2 | 12 |
| Dialogue Reasoning | MuTual | AIBC Score | 0.467 | 12 |
| Out-of-domain Generalization | DIPLOMAT, MuTual, QuALITY, CoQA, and Qasper Out-of-Domain Average (test) | Score | 70.9 | 9 |