
DeepSpeed Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale

About

The past several years have witnessed the success of transformer-based models, and their scale and application scenarios continue to grow aggressively. The current landscape of transformer models is increasingly diverse: the model size varies drastically, with the largest exceeding a hundred billion parameters; the model characteristics differ due to the sparsity introduced by Mixture-of-Experts; the target application scenarios can be latency-critical or throughput-oriented; and the deployment hardware could be single- or multi-GPU systems with different types of memory and storage. With such increasing diversity and the fast-evolving pace of transformer models, designing a highly performant and efficient inference system is extremely challenging. In this paper, we present DeepSpeed Inference, a comprehensive system solution for transformer model inference to address the above-mentioned challenges. DeepSpeed Inference consists of (1) a multi-GPU inference solution to minimize latency while maximizing the throughput of both dense and sparse transformer models when they fit in aggregate GPU memory, and (2) a heterogeneous inference solution that leverages CPU and NVMe memory in addition to GPU memory and compute to enable high inference throughput with large models that do not fit in aggregate GPU memory. DeepSpeed Inference reduces latency by up to 7.3x over the state-of-the-art for latency-oriented scenarios and increases throughput by over 1.5x for throughput-oriented scenarios. Moreover, it enables trillion-parameter-scale inference under real-time latency constraints by leveraging hundreds of GPUs, an unprecedented scale for inference. It can serve models 25x larger than GPU-only solutions, while delivering a high throughput of 84 TFLOPS (over 50% of A6000 peak).
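The abstract describes two inference paths chosen by whether a model's weights fit in aggregate GPU memory: dense multi-GPU inference when they fit, and heterogeneous CPU/NVMe offload when they do not. The sketch below illustrates that sizing decision with simple fp16 memory arithmetic; it is not DeepSpeed's API, and all function names and the 80 GB per-GPU figure are hypothetical assumptions for illustration.

```python
# Illustrative sketch (not DeepSpeed's API): choosing between the two
# inference paths described in the abstract, based on whether a model's
# fp16 weights fit in aggregate GPU memory. All names are hypothetical.

def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """fp16 weights take 2 bytes per parameter."""
    return num_params * bytes_per_param / 1e9

def choose_inference_path(num_params: float, num_gpus: int,
                          gpu_memory_gb: float = 80.0) -> str:
    """Pick dense multi-GPU inference if the weights fit across the GPUs;
    otherwise fall back to heterogeneous CPU/NVMe offload."""
    needed = weight_memory_gb(num_params)
    aggregate = num_gpus * gpu_memory_gb
    if needed <= aggregate:
        return "multi-gpu"        # partition weights across GPUs
    return "heterogeneous"        # offload weights to CPU/NVMe

# A 175B-parameter model in fp16 needs 350 GB of weight memory:
print(weight_memory_gb(175e9))             # 350.0
print(choose_inference_path(175e9, 8))     # multi-gpu (8 x 80 GB = 640 GB)
print(choose_inference_path(175e9, 2))     # heterogeneous (2 x 80 GB = 160 GB)
```

Note that this counts only weights; activations, KV cache, and workspace memory add further pressure, which is part of why the real system's partitioning is more involved.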

Reza Yazdani Aminabadi, Samyam Rajbhandari, Minjia Zhang, Ammar Ahmad Awan, Cheng Li, Du Li, Elton Zheng, Jeff Rasley, Shaden Smith, Olatunji Ruwase, Yuxiong He • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Generation throughput | Synthetic Generation Workload (512 prompt + 1024 generation tokens) | 10.1 tokens/s | 8 |
| Generation throughput | Synthetic Generation Workload (512 prompt + 32 generation tokens) | 10.2 tokens/s | 8 |
| Generation throughput | Synthetic Generation Workload (512 prompt + 512 generation tokens) | 9.6 tokens/s | 8 |
| Text Generation | XSum (test) | 3.52 tokens/s | 8 |
