SGLang: Efficient Execution of Structured Language Model Programs
About
Large language models (LLMs) are increasingly used for complex tasks that require multiple generation calls, advanced prompting techniques, control flow, and structured inputs/outputs. However, efficient systems are lacking for programming and executing these applications. We introduce SGLang, a system for efficient execution of complex language model programs. SGLang consists of a frontend language and a runtime. The frontend simplifies programming with primitives for generation and parallelism control. The runtime accelerates execution with novel optimizations like RadixAttention for KV cache reuse and compressed finite state machines for faster structured output decoding. Experiments show that SGLang achieves up to 6.4x higher throughput compared to state-of-the-art inference systems on various large language and multi-modal models on tasks including agent control, logical reasoning, few-shot learning benchmarks, JSON decoding, retrieval-augmented generation pipelines, and multi-turn chat. The code is publicly available at https://github.com/sgl-project/sglang.
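The core idea behind RadixAttention is to keep the KV cache of finished requests in a radix-tree-like structure keyed on token sequences, so that a new request sharing a prefix (e.g. a common system prompt or few-shot examples) can reuse the already-computed entries. The sketch below is a toy illustration of that prefix-matching idea only, not SGLang's actual implementation; the class names `PrefixCache` and `RadixNode` are invented for this example, and the real system additionally manages GPU memory and LRU eviction.

```python
class RadixNode:
    """One node per token ID; a path from the root spells a cached token prefix."""
    def __init__(self):
        self.children = {}   # token ID -> RadixNode
        self.cached = False  # True if KV entries up to this node are resident


class PrefixCache:
    """Toy prefix cache: insert token sequences of finished requests,
    then ask how many leading tokens of a new request are already cached."""

    def __init__(self):
        self.root = RadixNode()

    def insert(self, tokens):
        # Record that KV entries for every prefix of `tokens` are cached.
        node = self.root
        for t in tokens:
            node = node.children.setdefault(t, RadixNode())
            node.cached = True

    def match_prefix(self, tokens):
        # Walk down the tree; stop at the first token with no cached entry.
        node, hit = self.root, 0
        for t in tokens:
            nxt = node.children.get(t)
            if nxt is None or not nxt.cached:
                break
            node, hit = nxt, hit + 1
        return hit  # number of leading tokens whose KV cache can be reused


# Example: two chat turns sharing a system prompt (token IDs are arbitrary).
cache = PrefixCache()
cache.insert([101, 102, 103, 7, 8])        # first request: system prompt + turn 1
reused = cache.match_prefix([101, 102, 103, 9])  # second request shares the prompt
print(reused)  # 3 tokens of KV cache reused; only the tail is recomputed
```

In the real runtime this lookup decides how much prefill computation a new request can skip, which is where the throughput gains on multi-turn chat and few-shot benchmarks come from.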
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Text-to-SQL | BIRD (dev) | Execution Accuracy (EA): 61.3 | 217 |
| Text-to-SQL | Spider (dev) | -- | 100 |
| LLM Decoding | Llama 3.1 70B | Throughput: 3.02e+3 | 48 |
| LLM Decoding | Llama 3.1 70B (H100 GPU Cluster) | Throughput: 878.1 | 27 |
| Decoding | Llama 3.1 70B (inference) | Throughput: 1.39e+3 | 21 |
| Hybrid Retrieval-Augmented Generation | Hybrid RAG | TTFT (s): 0.46 | 20 |
| End-to-End Inference | LMSys-Chat-1M and ShareGPT traces | p99 TTFT (ms): 84.87 | 18 |
| Output Equivalence | Qwen3 | Exact Match: 47.7 | 13 |
| Output Equivalence | Vicuna | Exact Match: 69.8 | 13 |
| Multi-session Retrieval-Augmented Generation | QASPER (test) | F1 Score: 36 | 12 |