
SAIR: Cost-Efficient Multi-Stage ML Pipeline Autoscaling via In-Context Reinforcement Learning

About

Multi-stage ML inference pipelines are difficult to autoscale due to heterogeneous resources, cross-stage coupling, and dynamic bottleneck migration. We present SAIR, an autoscaling framework that uses an LLM as an in-context reinforcement learning controller, improving its policy online from reward-labeled interaction histories without gradient updates. SAIR combines Pareto-dominance reward shaping with a provable separation margin, surprisal-guided experience retrieval for context efficiency, and fine-grained GPU rate control via user-space CUDA interception. We provide regret analysis decomposing error into retrieval coverage and LLM selection components. On four ML serving pipelines under three workload patterns, SAIR achieves the best or tied-best P99 latency and effective resource cost among deployed baselines, improving P99 by up to 50% and reducing effective cost by up to 97% (under GPU rate-control assumptions), with 86% bottleneck detection accuracy and no offline training.
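As a rough illustration of the Pareto-dominance reward shaping the abstract mentions, the sketch below labels an autoscaling action's outcome against a baseline on the (P99 latency, cost) plane. All names, the tuple layout, and the relative `margin` threshold are illustrative assumptions, not details from the paper; the actual separation-margin construction is defined in the paper itself.

```python
# Illustrative sketch only: outcome tuples, margin semantics, and reward
# values are assumptions, not SAIR's actual formulation.

def dominates(a, b, margin=0.05):
    """True if outcome a Pareto-dominates outcome b with a separation margin.

    a, b are (p99_latency, cost) pairs; lower is better on both axes.
    a must be no worse than b on both axes and beat b by at least
    `margin` (relative) on at least one axis.
    """
    no_worse = a[0] <= b[0] and a[1] <= b[1]
    strictly_better = a[0] <= b[0] * (1 - margin) or a[1] <= b[1] * (1 - margin)
    return no_worse and strictly_better

def shaped_reward(outcome, baseline, margin=0.05):
    """+1 if the action's outcome dominates the baseline, -1 if it is
    dominated, 0 if the two are Pareto-incomparable; such labels could
    annotate the interaction histories fed to the in-context controller."""
    if dominates(outcome, baseline, margin):
        return 1
    if dominates(baseline, outcome, margin):
        return -1
    return 0
```

A margin of this kind separates genuinely better actions from ties, so the reward signal stays informative even when latency and cost trade off against each other.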

Jianchang Su, Yifan Zhang, Shengkai Lin, Shizhen Zhao, Yusheng Zheng, Yiwei Yang, Wei Zhang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | ImgCls | Cost per 1K Requests ($) | 0.024 | 15 |
| Natural Language Processing | NLP | Cost per 1K Requests ($) | 0.007 | 15 |
| Text Generation | TextGen | Cost per 1K Requests ($) | 0.006 | 15 |
| Video Analytics | VIDEO | Cost per 1K Requests | 0.49 | 15 |
