
Hybrid LLM: Cost-Efficient and Quality-Aware Query Routing

About

Large language models (LLMs) excel at most NLP tasks, but their size requires expensive cloud servers for deployment, while smaller models that can be deployed on lower-cost (e.g., edge) devices tend to lag behind in response quality. In this work we therefore propose a hybrid inference approach that combines their respective strengths to save cost while maintaining quality. Our approach uses a router that assigns queries to the small or large model based on the predicted query difficulty and the desired quality level. The desired quality level can be tuned dynamically at test time to seamlessly trade quality for cost as scenario requirements dictate. In experiments, our approach allows us to make up to 40% fewer calls to the large model with no drop in response quality.
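The routing idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the difficulty scorer, the `threshold` knob, and all names here are hypothetical stand-ins for the learned router the abstract describes.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class HybridRouter:
    """Route each query to a small or large model by predicted difficulty.

    `difficulty` is any scorer returning a value in [0, 1]; `threshold` is
    the quality knob: raising it sends more queries to the small model,
    lowering it sends more to the large model. Both names are illustrative.
    """
    difficulty: Callable[[str], float]
    threshold: float = 0.5

    def route(self, query: str) -> str:
        # Queries scored below the threshold are "easy enough" for the
        # cheap small model; the rest go to the expensive large model.
        return "small" if self.difficulty(query) < self.threshold else "large"

# Toy scorer that proxies difficulty by query length (purely illustrative;
# the paper learns difficulty from data instead).
router = HybridRouter(difficulty=lambda q: min(len(q) / 100.0, 1.0),
                      threshold=0.3)

print(router.route("2+2?"))                       # short query -> "small"
print(router.route("Explain the proof of " * 5))  # long query  -> "large"
```

Because `threshold` is just a runtime parameter, the quality/cost trade-off can be adjusted at test time without retraining, which is the property the abstract highlights.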

Dujian Ding, Ankur Mallick, Chi Wang, Robert Sim, Subhabrata Mukherjee, Victor Ruhle, Laks V.S. Lakshmanan, Ahmed Hassan Awadallah • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Visual Question Answering | Chest X-ray VQA (test) | Overall Accuracy: 42.69 | 43 |
| Computer-Aided Diagnosis (CAD) | VinDr | AUC: 0.4666 | 32 |
| Disease Diagnosis | Open-i | Accuracy: 66.13 | 30 |
| LLM Routing | MMLU, CMMLU, etc. (in-distribution) | Performance: 56.65 | 21 |
| LLM Routing | CEVAL and GSM8K (OOD) | Performance: 63.79 | 21 |
| Visual Grounding | Chest X-ray Visual Grounding | Aortic Enlargement Score: 59.88 | 19 |
| Dialogue Reasoning | DIPLOMAT | AIBC Score: 67.1 | 12 |
| Conversational Question Answering | CoQA | AIBC: 20.1 | 12 |
| Dialogue Reasoning | MuTual | AIBC Score: 0.034 | 12 |
| Out-of-domain Generalization | DIPLOMAT, MuTual, QuALITY, CoQA, and Qasper Out-of-Domain Average (test) | Score: 7.6 | 9 |
