
Analyzing and Improving Fast Sampling of Text-to-Image Diffusion Models

About

Text-to-image diffusion models have achieved unprecedented success but still struggle to produce high-quality results under limited sampling budgets. Existing training-free sampling acceleration methods are typically developed independently, leaving the overall performance and compatibility among these methods unexplored. In this paper, we bridge this gap by systematically elucidating the design space, and our comprehensive experiments identify the sampling time schedule as the most pivotal factor. Inspired by the geometric properties of diffusion models revealed through the Frenet-Serret formulas, we propose constant total rotation schedule (TORS), a scheduling strategy that ensures uniform geometric variation along the sampling trajectory. TORS outperforms previous training-free acceleration methods and produces high-quality images with 10 sampling steps on Flux.1-Dev and Stable Diffusion 3.5. Extensive experiments underscore the adaptability of our method to unseen models, hyperparameters, and downstream applications.
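To make the core idea concrete: a time schedule that "ensures uniform geometric variation" can be illustrated by choosing sampling steps so that each step spans an equal share of the total tangent rotation along a reference trajectory. The sketch below is purely illustrative and not the paper's implementation; the toy 2D trajectory, the function names, and the step-selection rule are all assumptions introduced here for exposition.

```python
import math

# Hypothetical sketch (NOT the paper's TORS implementation): given a densely
# sampled reference trajectory x(t), pick a small number of timesteps so that
# each sampling step covers an approximately equal share of the total rotation
# of the trajectory's tangent direction.

def tangent_angles(points):
    """Unsigned rotation angle between consecutive tangent directions (2D)."""
    angles = []
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        v1 = (x1 - x0, y1 - y0)
        v2 = (x2 - x1, y2 - y1)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        cos_a = max(-1.0, min(1.0, dot / (math.hypot(*v1) * math.hypot(*v2))))
        angles.append(math.acos(cos_a))
    return angles

def equal_rotation_schedule(ts, points, n_steps):
    """Choose n_steps+1 timesteps so each step spans ~equal total rotation."""
    angles = tangent_angles(points)
    total = sum(angles)
    target = total / n_steps
    cum, picked = 0.0, [ts[0]]
    for i, a in enumerate(angles):
        cum += a
        if cum >= target * len(picked) and len(picked) < n_steps:
            picked.append(ts[i + 1])
    picked.append(ts[-1])
    return picked

# Toy reference trajectory: a quarter circle, sampled densely. Since a circle
# has constant curvature, the resulting schedule is close to uniform; a real
# diffusion trajectory would bend unevenly and yield a non-uniform schedule.
ts = [i / 200 for i in range(201)]
pts = [(math.cos(0.5 * math.pi * t), math.sin(0.5 * math.pi * t)) for t in ts]
schedule = equal_rotation_schedule(ts, pts, 10)
print(schedule)
```

The design intent this mimics is that curvature-heavy stretches of the trajectory receive denser timesteps, whereas a uniform-in-t schedule would spend steps indiscriminately.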

Zhenyu Zhou, Defang Chen, Siwei Lyu, Chun Chen, Can Wang • 2026

Related benchmarks

Task                      Dataset                                                       Result                      Rank
Image Editing             PIE-Bench                                                     PSNR 27.59                  166
Text-to-Image Generation  DrawBench                                                     IR (Similarity Score) 97    10
Text-to-Image Generation  DrawBench, evaluated with Stable Diffusion 3.5 Medium (test)  IR 0.86                     10
