
FlashSampling: Fast and Memory-Efficient Exact Sampling

About

Sampling from a categorical distribution is mathematically simple, but in large-vocabulary decoding it often triggers extra memory traffic and extra kernels after the LM head. We present FlashSampling, an exact sampling primitive that fuses sampling into the LM-head matmul and never materializes the logits tensor in HBM. The method is simple: compute logits tile-by-tile on chip, add Gumbel noise, keep only one maximizer per row and per vocabulary tile, and finish with a small reduction over tiles. The fused tiled kernel is exact because $\argmax$ decomposes over a partition; grouped variants for online and tensor-parallel settings are exact by hierarchical factorization of the categorical distribution. Across H100, H200, B200, and B300 GPUs, FlashSampling speeds up kernel-level decode workloads, and in end-to-end vLLM experiments it reduces time per output token by up to $19\%$ on the models we test. These results show that exact sampling, with no approximation, can be integrated into the matmul itself, turning a bandwidth-bound postprocessing step into a lightweight epilogue. Project Page: https://github.com/FlashSampling/FlashSampling.
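The exactness argument rests on the Gumbel-max trick: adding i.i.d. Gumbel noise to the logits and taking the argmax draws an exact sample from the softmax distribution, and since argmax decomposes over any partition of the vocabulary, a per-tile maximizer followed by a reduction over tiles gives the same answer as materializing all logits. A minimal NumPy sketch of this decomposition (illustrative sizes and variable names, not the authors' kernel):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: hidden dim, vocab size, number of vocabulary tiles.
D, V, T = 64, 1000, 8
h = rng.standard_normal(D)        # hidden state for one row
W = rng.standard_normal((V, D))   # LM-head weight

# Gumbel-max trick: argmax_v (logit_v + g_v) with g_v ~ Gumbel(0, 1)
# is an exact sample from softmax(logits).
g = rng.gumbel(size=V)

# Reference: materialize all logits at once, then sample.
full = int(np.argmax(W @ h + g))

# Tiled variant: compute logits one vocabulary tile at a time, keep one
# (value, index) maximizer per tile, and reduce over tiles. Because
# argmax decomposes over the partition, the result is identical.
tile = V // T
best_val, best_idx = -np.inf, -1
for t in range(T):
    lo, hi = t * tile, (t + 1) * tile
    scores = W[lo:hi] @ h + g[lo:hi]  # this tile's logits, never stored globally
    j = int(np.argmax(scores))
    if scores[j] > best_val:
        best_val, best_idx = scores[j], lo + j

assert best_idx == full  # tiled and full computations agree exactly
```

In the paper's fused kernel this per-tile maximization runs as an epilogue of the LM-head matmul on chip, so the full logits tensor never reaches HBM; the sketch only demonstrates why the decomposition loses nothing.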

Tomas Ruiz, Zhen Qin, Yifan Zhang, Xuyang Shen, Yiran Zhong, Mengdi Wang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Kernel Speedup | Synthetic Large Configuration (D=8192, V=128k) | Speedup | 1.88 | 108 |
| Fused Matmul and Sampling | Synthetic D=4096, V=151k | Speedup vs Multinomial Sampling | 1.98 | 36 |
| LLM Inference Performance | Synthetic Poisson Process Requests | TPOT Speedup (%) | 18.7 | 28 |
