Hybrid LLM: Cost-Efficient and Quality-Aware Query Routing
About
Large language models (LLMs) excel at most NLP tasks, but their size requires expensive cloud servers for deployment, while smaller models that can run on lower-cost (e.g., edge) devices tend to lag behind in response quality. In this work we therefore propose a hybrid inference approach that combines their respective strengths to save cost while maintaining quality. Our approach uses a router that assigns each query to the small or the large model based on the predicted query difficulty and the desired quality level. The desired quality level can be tuned dynamically at test time to seamlessly trade quality for cost as scenario requirements dictate. In experiments, our approach allows us to make up to 40% fewer calls to the large model with no drop in response quality.
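The routing idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `HybridRouter` class, the `score_fn` name, and the toy length-based difficulty proxy are all assumptions standing in for the learned router, and the threshold plays the role of the tunable quality level.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class HybridRouter:
    # score_fn is a stand-in for the learned router: it predicts an
    # "easiness" score in [0, 1] for a query (higher = small model suffices).
    score_fn: Callable[[str], float]
    # threshold is the test-time knob: raising it sends more queries to the
    # large model (higher quality, higher cost); lowering it saves cost.
    threshold: float = 0.5

    def route(self, query: str) -> str:
        # Easy queries go to the small model, hard ones to the large model.
        return "small" if self.score_fn(query) >= self.threshold else "large"


# Toy difficulty proxy (illustrative only): shorter queries count as easier.
router = HybridRouter(score_fn=lambda q: 1.0 / (1.0 + len(q.split()) / 10))

print(router.route("What is 2 + 2?"))  # short query -> "small"
```

Sweeping `threshold` over [0, 1] traces out the cost–quality trade-off curve: at 0 every query goes to the small model, at 1 every query goes to the large model.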
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | Chest X-ray VQA (test) | Overall Accuracy | 42.69 | 43 |
| Computer-Aided Diagnosis (CAD) | VinDr | AUC | 0.4666 | 32 |
| Disease Diagnosis | Open-i | Accuracy | 66.13 | 30 |
| LLM Routing | MMLU, CMMLU, etc. (in-distribution) | Performance | 56.65 | 21 |
| LLM Routing | CEVAL and GSM8K (OOD) | Performance | 63.79 | 21 |
| Visual Grounding | Chest X-ray Visual Grounding | Aortic Enlargement Score | 59.88 | 19 |
| Dialogue Reasoning | DIPLOMAT | AIBC Score | 67.1 | 12 |
| Conversational Question Answering | CoQA | AIBC | 20.1 | 12 |
| Dialogue Reasoning | MuTual | AIBC Score | 0.034 | 12 |
| Out-of-domain Generalization | DIPLOMAT, MuTual, QuALITY, CoQA, and Qasper Out-of-Domain Average (test) | Score | 7.6 | 9 |