
Causal LLM Routing: End-to-End Regret Minimization from Observational Data

About

LLM routing aims to select the most appropriate model for each query, balancing competing performance metrics such as accuracy and cost across a pool of language models. Prior approaches typically adopt a decoupled strategy, where the metrics are first predicted and the model is then selected based on these estimates. This setup is prone to compounding errors and often relies on full-feedback data, where each query is evaluated by all candidate models, which is costly to obtain and maintain in practice. In contrast, we learn from observational data, which records only the outcome of the model actually deployed. We propose a causal end-to-end framework that learns routing policies by minimizing decision-making regret from observational data. To enable efficient optimization, we introduce two theoretically grounded surrogate objectives: a classification-based upper bound, and a softmax-weighted regret approximation shown to recover the optimal policy at convergence. We further extend our framework to handle heterogeneous cost preferences via an interval-conditioned architecture. Experiments on public benchmarks show that our method outperforms existing baselines, achieving state-of-the-art performance across different embedding models.
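To make the setting concrete, here is a minimal, self-contained sketch of learning a softmax routing policy from observational logs, where each query records only the outcome of the model actually deployed. This is a generic inverse-propensity-weighted policy-optimization stand-in, not the paper's actual surrogate objectives (the classification-based upper bound and softmax-weighted regret approximation); all names, shapes, and the synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observational log (illustrative, not from the paper):
#   X        - query embeddings
#   logged_a - index of the LLM actually deployed for each query
#   r        - observed utility of that one model (e.g. accuracy - lambda * cost)
#   prop     - propensity of the logging policy (assumed known; uniform here)
n, d, k = 500, 8, 3                       # queries, embedding dim, candidate models
X = rng.normal(size=(n, d))
true_W = rng.normal(size=(k, d))          # hidden "true" utility model, for simulation only
logged_a = rng.integers(0, k, size=n)     # logging policy deployed models uniformly
prop = np.full(n, 1.0 / k)
r = (X @ true_W.T)[np.arange(n), logged_a] + 0.1 * rng.normal(size=n)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Softmax policy pi_W(a|x) = softmax(W x); maximize the IPS estimate of
# expected utility, J(W) = mean_i (r_i / prop_i) * pi_W(a_i | x_i),
# by gradient ascent. Only the logged action's outcome is ever used.
W = np.zeros((k, d))
lr = 0.2
for _ in range(400):
    pi = softmax(X @ W.T)                         # (n, k) action probabilities
    w_ips = r / prop                              # inverse-propensity weights
    p_logged = pi[np.arange(n), logged_a]         # pi(a_i | x_i)
    # Gradient of w_i * pi(a_i|x_i) w.r.t. logits: w_i * p_logged * (1[a=a_i] - pi)
    g_logits = -pi * (w_ips * p_logged)[:, None]
    g_logits[np.arange(n), logged_a] += w_ips * p_logged
    W += lr * (g_logits.T @ X) / n

# Deployed policy: route each query to the highest-probability model.
routes = softmax(X @ W.T).argmax(axis=1)
```

Because the simulator knows `true_W`, one can check that the learned router achieves higher average true utility than routing uniformly at random, despite training on single-model feedback only.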

Asterios Tsiourvas, Wei Sun, Georgia Perakis • 2025

Related benchmarks

Task          Dataset                      Result        Rank
LLM Routing   MMR-Bench                    nAUC 0.6914   11
LLM Routing   RouterBench                  nAUC 0.7055   11
LLM Routing   RouterBench Out-of-domain    nAUC 75.24    9
LLM Routing   MMR-Bench Out-of-domain      nAUC 0.6464   9
