
Efficient Interactive LLM Serving with Proxy Model-based Sequence Length Prediction

About

Large language models (LLMs) have been driving a new wave of interactive AI applications across numerous domains. However, efficiently serving LLM inference requests is challenging because their execution times are unpredictable, a consequence of the autoregressive nature of generative models. Existing LLM serving systems rely on first-come-first-serve (FCFS) scheduling and therefore suffer from head-of-line blocking. To address the non-deterministic nature of LLMs and enable efficient interactive LLM serving, we present a speculative shortest-job-first (SSJF) scheduler that uses a lightweight proxy model to predict LLM output sequence lengths. Our open-source SSJF implementation does not require changes to memory management or batching strategies. Evaluations on real-world datasets and production workload traces show that SSJF reduces average job completion times by 30.5-39.6% and increases throughput by 2.2-3.6x compared to FCFS schedulers, across no batching, dynamic batching, and continuous batching settings.
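The core idea above can be sketched in a few lines: jobs are admitted to a priority queue keyed on a *predicted* output length, then dispatched shortest-predicted-first. The sketch below is illustrative only; `predict_length` is a hypothetical stand-in for the paper's proxy model, not its actual predictor.

```python
import heapq
from dataclasses import dataclass, field

def predict_length(prompt: str) -> int:
    """Toy proxy predictor (assumption): guess output length in tokens
    from the prompt's word count. The paper uses a learned proxy model."""
    return 4 * len(prompt.split())

@dataclass(order=True)
class Job:
    predicted_len: int                      # heap ordering key
    prompt: str = field(compare=False)      # payload, excluded from ordering

def ssjf_order(prompts):
    """Return prompts in speculative shortest-job-first order:
    pop jobs from a min-heap keyed on predicted output length."""
    heap = [Job(predict_length(p), p) for p in prompts]
    heapq.heapify(heap)
    order = []
    while heap:
        order.append(heapq.heappop(heap).prompt)
    return order

jobs = [
    "write a long essay about the history of computing",
    "say hi",
    "summarize this paragraph in one sentence",
]
print(ssjf_order(jobs))  # shortest predicted job ("say hi") is served first
```

Because a short job is never stuck behind a long one, average job completion time drops relative to FCFS whenever the predictor ranks jobs even roughly correctly.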

Haoran Qiu, Weichao Mao, Archit Patke, Shengkun Cui, Saurabh Jha, Chen Wang, Hubertus Franke, Zbigniew T. Kalbarczyk, Tamer Başar, Ravishankar K. Iyer • 2024

Related benchmarks

Task                              | Dataset                                                | Metric     | Result | Rank
Output Length Prediction          | ForeLen LongSeq                                        | MAE        | 174    | 48
Output Length Prediction          | ForeLen Reasoning                                      | MAE        | 143.8  | 32
Output Length Prediction          | ForeLen RL                                             | MAE        | 167.2  | 32
Length Prediction                 | ForeLen RL 1.0 (test)                                  | MAE        | 125.4  | 16
Length Prediction                 | ForeLen Avg. 1.0 (test)                                | MAE        | 240.7  | 16
Length Prediction                 | ForeLen Reasoning 1.0 (test)                           | MAE        | 221.2  | 16
Output Length Prediction          | LMSYS                                                  | MAE        | 140.2  | 16
System Performance Evaluation     | Reasoning                                              | Throughput | 146.7  | 8
System Performance Evaluation     | Long Sequence                                          | Throughput | 117.9  | 8
Output Sequence Length Prediction | WritingPrompts super-long sequences (> 17k tokens) OOD | MAE        | 280.3  | 8

(Showing 10 of 13 rows)
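Most rows above report mean absolute error (MAE), here the average absolute gap between the predicted and true output lengths in tokens. A minimal sketch of the metric, with made-up example numbers (not taken from the benchmarks above):

```python
def mean_absolute_error(predicted, actual):
    """MAE over paired predictions: mean of |predicted - actual|.
    For length prediction, units are tokens."""
    assert len(predicted) == len(actual) and predicted, "need equal, non-empty lists"
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

# Hypothetical predicted vs. true output lengths (tokens):
mae = mean_absolute_error([120, 300, 50], [100, 350, 80])
print(mae)  # (20 + 50 + 30) / 3
```

Lower is better: an MAE of 140 on LMSYS means the predictor is off by about 140 tokens per request on average.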
