Scheduling LLM Inference with Uncertainty-Aware Output Length Predictions
About
To schedule LLM inference, the \textit{shortest job first} (SJF) principle is attractive: prioritizing requests with short output lengths avoids head-of-line (HOL) blocking. Existing methods usually predict a single output length for each request to facilitate scheduling. We argue that such a \textit{point estimate} does not match the \textit{stochastic} decoding process of LLM inference, where the output length is inherently \textit{uncertain} and determined by when the end-of-sequence (EOS) token is sampled. Hence, the output length of each request should be fitted with a distribution rather than a single value. Through an in-depth analysis of empirical data and the stochastic decoding process, we observe that output lengths follow a heavy-tailed distribution that is well fitted by the log-t distribution. On this basis, we propose a simple metric called Tail Inflated Expectation (TIE) to replace the predicted output length in SJF scheduling: it adjusts the expectation of a log-t distribution with its tail probabilities to account for the risk that a request generates a long output. We compare the TIE scheduler against three strong baselines; the results show that TIE reduces per-token latency by $2.31\times$ for online inference and improves throughput by $1.42\times$ for offline data generation.
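As a rough illustration of the idea, the sketch below scores each request with a tail-inflated statistic of a fitted log-t distribution and then schedules in SJF order by that score. The concrete formula (`median * (1 + lam * P(length > threshold))`), the parameter names, and the Monte Carlo tail estimate are all our own assumptions for illustration, not the paper's actual TIE definition; the per-request log-t parameters `(mu, sigma, nu)` are assumed to come from an upstream length predictor.

```python
import math
import random

def sample_log_t(mu, sigma, nu, rng):
    """Draw one sample from a log-t distribution.

    A Student-t variate is Z / sqrt(V / nu) with Z ~ N(0, 1) and
    V ~ chi-square(nu); exponentiating mu + sigma * t gives a log-t
    output length. nu must be a positive integer here.
    """
    z = rng.gauss(0.0, 1.0)
    v = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(nu))
    t = z / math.sqrt(v / nu)
    return math.exp(mu + sigma * t)

def tie_score(mu, sigma, nu, tail_threshold, lam, n_samples=20000, seed=0):
    """Hypothetical tail-inflated score (NOT the paper's exact TIE formula).

    Inflates the distribution's median length by the (Monte Carlo
    estimated) probability that the request exceeds `tail_threshold`
    tokens, so heavy-tailed requests are pushed later in the queue.
    """
    rng = random.Random(seed)
    p_tail = sum(
        sample_log_t(mu, sigma, nu, rng) > tail_threshold
        for _ in range(n_samples)
    ) / n_samples
    median = math.exp(mu)  # median of a log-t distribution
    return median * (1.0 + lam * p_tail)

# Two hypothetical requests with the same median predicted length (50
# tokens) but different tail heaviness (sigma): SJF on the TIE-style
# score runs the low-risk request first.
requests = {
    "short": (math.log(50), 0.6, 3),  # light tail
    "risky": (math.log(50), 1.8, 3),  # heavy tail
}
order = sorted(
    requests,
    key=lambda r: tie_score(*requests[r], tail_threshold=400.0, lam=2.0),
)
print(order)  # "short" is scheduled before "risky"
```

A point estimate (e.g. the shared median of 50 tokens) would rank these two requests identically; the tail term is what separates them.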
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Offline Synthetic Data Generation | Alpaca | Generation Time (s) | 68.8 | 44 |
| Chatbot Workload | ShareGPT | Average PTLA (s/token) | 0.36 | 36 |
| Chatbot Workload | LMSYS-Chat-1M | Average PTLA (s/token) | 0.47 | 28 |
| Chatbot Workload | Alpaca | Average PTLA (s/token) | 0.30 | 28 |
| LLM Inference Scheduling | Alpaca | Average Per-token Latency (s/token) | 0.52 | 8 |
| LLM Inference Scheduling | LMSYS-Chat-1M | Average Per-token Latency (s/token) | 2.41 | 4 |