
Cache & Distil: Optimising API Calls to Large Language Models

About

Large-scale deployment of generative AI tools often depends on costly API calls to a Large Language Model (LLM) to fulfil user queries. To curtail the frequency of these calls, one can employ a smaller language model -- a student -- which is continuously trained on the responses of the LLM. This student gradually gains proficiency in independently handling an increasing number of user requests, a process we term neural caching. The crucial element in neural caching is a policy that decides which requests should be processed by the student alone and which should be redirected to the LLM, subsequently aiding the student's learning. In this study, we focus on classification tasks, and we consider a range of classic active learning-based selection criteria as the policy. Our experiments suggest that Margin Sampling and Query by Committee bring consistent benefits across tasks and budgets.
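As a rough illustration of the routing policy the abstract describes, the sketch below implements two of the mentioned active-learning criteria: Margin Sampling (escalate when the student's top-two class probabilities are close) and Query by Committee (escalate when committee members disagree). Function names, the threshold value, and the overall structure are hypothetical, not taken from the paper's implementation.

```python
import math

def margin_score(probs):
    """Margin Sampling: gap between the top-2 class probabilities.
    A small margin means the student is uncertain about this request."""
    second, first = sorted(probs)[-2:]
    return first - second

def vote_entropy(committee_preds):
    """Query by Committee: entropy of the committee's hard votes.
    Zero when all members agree; higher with more disagreement."""
    counts = {}
    for pred in committee_preds:
        counts[pred] = counts.get(pred, 0) + 1
    n = len(committee_preds)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def route_to_llm(probs, threshold=0.2):
    """Neural-caching policy (sketch): send the request to the LLM when
    the student's margin is below the threshold, otherwise let the
    student answer. LLM responses would then be used to keep training
    the student."""
    return margin_score(probs) < threshold
```

For example, a student prediction of `[0.40, 0.35, 0.25]` has a margin of 0.05 and would be escalated, while `[0.90, 0.05, 0.05]` would be handled locally; the actual threshold trades off API budget against accuracy.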

Guillem Ramírez, Matthias Lindemann, Alexandra Birch, Ivan Titov • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Neural Caching | ISEAR | Online Accuracy (AUC) | 0.666 | 9 |
| Neural Caching | RT-Polarity | Online Accuracy (AUC) | 0.896 | 9 |
| Neural Caching | FEVER | Online Accuracy (AUC) | 75.3 | 9 |
| Neural Caching | Openbook | Online Accuracy (AUC) | 73.7 | 9 |
| Neural caching with student retraining | ISEAR | AUC | 60.9 | 5 |
| Neural caching with student retraining | RT-Polarity | AUC | 0.885 | 5 |
| Neural caching with student retraining | FEVER | AUC | 68.7 | 5 |
| Neural caching with student retraining | Openbook | AUC | 64.7 | 5 |

Other info

Code
