
USP: A Unified Sequence Parallelism Approach for Long Context Generative AI

About

Sequence parallelism (SP), which divides the sequence dimension of input tensors across multiple computational devices, is becoming key to unlocking the long-context capabilities of generative AI models. This paper investigates the state-of-the-art SP approaches, i.e., DeepSpeed-Ulysses and Ring-Attention, and proposes a unified SP approach that is more robust to transformer model architectures and network hardware topologies. This paper compares the communication and memory costs of SP with those of existing parallelism approaches, including data/tensor/ZeRO/pipeline parallelism, and discusses best practices for designing hybrid 4D parallelism involving SP. We achieved 47% MFU on two 8xA800 nodes using SP to train the LLAMA3-8B model with a sequence length of 208K. Our code is publicly available at https://github.com/feifeibear/long-context-attention.
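The core idea of SP described above is to partition activations along the sequence axis so each device holds only a slice of a long context. A minimal sketch of that partitioning step, using NumPy arrays as a stand-in for device tensors (this is an illustration of the general idea, not the paper's implementation, which uses distributed collectives such as all-to-all and ring communication):

```python
import numpy as np

def shard_sequence(x: np.ndarray, world_size: int) -> list[np.ndarray]:
    """Split x of shape (batch, seq_len, hidden) into world_size
    contiguous chunks along the sequence axis (axis=1).
    Assumes seq_len is divisible by world_size."""
    batch, seq_len, hidden = x.shape
    assert seq_len % world_size == 0, "seq_len must divide evenly"
    return np.split(x, world_size, axis=1)

# Example: a (2, 8, 4) activation sharded across 4 hypothetical devices.
x = np.random.randn(2, 8, 4)
shards = shard_sequence(x, world_size=4)
print([s.shape for s in shards])  # each shard is (2, 2, 4)

# Gathering the shards back along axis 1 recovers the full sequence.
assert np.allclose(np.concatenate(shards, axis=1), x)
```

In a real SP implementation, attention over the full sequence then requires communication between the shards; DeepSpeed-Ulysses uses all-to-all to exchange head/sequence dimensions, while Ring-Attention passes key/value blocks around a ring.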

Jiarui Fang, Shangchun Zhao • 2024

Related benchmarks

Task | Dataset | Result | Rank
Text-to-Image Generation | FLUX.1 12B inference 1.0 (dev) | Inference Time (s): 5.22 | 6
Multimodal Inference | Qwen-Image (inference) | Inference Latency (s): 16.33 | 2
