
RouteLLM: Learning to Route LLMs with Preference Data

About

Large language models (LLMs) exhibit impressive capabilities across a wide range of tasks, yet the choice of which model to use often involves a trade-off between performance and cost. More powerful models, though effective, come with higher expenses, while less capable models are more cost-effective. To address this dilemma, we propose several efficient router models that dynamically select between a stronger and a weaker LLM during inference, aiming to optimize the balance between cost and response quality. We develop a training framework for these routers that leverages human preference data and data augmentation techniques to enhance performance. Our evaluation on widely recognized benchmarks shows that our approach significantly reduces costs (by over a factor of two in certain cases) without compromising response quality. Interestingly, our router models also demonstrate significant transfer learning capabilities, maintaining their performance even when the strong and weak models are changed at test time. This highlights the potential of these routers to provide a cost-effective yet high-performance solution for deploying LLMs.
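The core idea above, routing each query to the strong model only when the expected quality gain justifies the cost, can be sketched as follows. This is a minimal illustration, not the authors' implementation: `predict_strong_win_rate` is a hypothetical stand-in for a router trained on preference data, and the word-count heuristic is purely a placeholder.

```python
def predict_strong_win_rate(query: str) -> float:
    """Hypothetical stand-in for a learned router: estimates the
    probability that the strong model's response would be preferred.
    A real router would be trained on human preference data; here we
    use query length as a crude proxy for difficulty."""
    return min(1.0, len(query.split()) / 50)

def route(query: str, threshold: float = 0.5) -> str:
    """Send the query to the strong model only when the predicted win
    rate exceeds the threshold; otherwise use the cheaper weak model."""
    if predict_strong_win_rate(query) >= threshold:
        return "strong-model"
    return "weak-model"

# A short, easy query stays on the cheap model.
print(route("What is 2 + 2?"))
```

Raising the threshold shifts more traffic to the weak model, trading response quality for lower cost; sweeping it traces out the cost-quality curve the paper evaluates.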

Isaac Ong, Amjad Almahairi, Vincent Wu, Wei-Lin Chiang, Tianhao Wu, Joseph E. Gonzalez, M Waleed Kadous, Ion Stoica · 2024

Related benchmarks

Task                            Dataset                 Metric            Result   Rank
Code Generation                 HumanEval               Pass@1            83.85    850
Mathematical Reasoning          GSM8K                   Accuracy          89       351
Reading Comprehension           RACE high               Accuracy          79.4     295
Mathematical Reasoning          MATH                    Accuracy          51       162
Visual Question Answering       Chest X-ray VQA (test)  Overall Accuracy  63.74    43
Mathematical Reasoning          MATH 500                Accuracy          96.4     40
Computer-Aided Diagnosis (CAD)  VinDr                   AUC               0.4071   32
Disease Diagnosis               Open-i                  Accuracy          33.97    30
Mathematical Reasoning          AMC 2023                Accuracy          100      26
Mathematical Reasoning          GSM8K                   Accuracy          94.5     26

Showing 10 of 37 rows.
