
SDiT: Semantic Region-Adaptive Diffusion Transformers

About

Diffusion Transformers (DiTs) achieve state-of-the-art performance in text-to-image synthesis but remain computationally expensive due to the iterative nature of denoising and the quadratic cost of global attention. In this work, we observe that denoising dynamics are spatially non-uniform: background regions converge rapidly, while edges and textured areas evolve much more actively. Building on this insight, we propose SDiT, a Semantic Region-Adaptive Diffusion Transformer that allocates computation according to regional complexity. SDiT introduces a training-free framework combining (1) semantic-aware clustering via fast Quickshift-based segmentation, (2) complexity-driven regional scheduling to selectively update informative areas, and (3) boundary-aware refinement to maintain spatial coherence. Without any model retraining or architectural modification, SDiT achieves up to 3.0x acceleration while preserving nearly identical perceptual and semantic quality to full-attention inference.
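The abstract does not specify how regional complexity is scored or how regions are scheduled. The sketch below illustrates the general idea under stated assumptions: region labels are taken as given (in SDiT they would come from Quickshift segmentation), complexity is approximated as the mean magnitude of the latest denoising update per region, and only the most active regions are marked for full updates. The function names (`region_complexity`, `adaptive_update_mask`) and the `keep_ratio` parameter are hypothetical, not from the paper.

```python
import numpy as np

def region_complexity(delta, labels):
    # Complexity proxy (assumption): mean absolute denoising update per region.
    return {r: np.abs(delta[labels == r]).mean() for r in np.unique(labels)}

def adaptive_update_mask(delta, labels, keep_ratio=0.5):
    # Keep the top keep_ratio fraction of regions (by complexity) active;
    # the rest are frozen for this denoising step.
    scores = region_complexity(delta, labels)
    k = max(1, int(len(scores) * keep_ratio))
    active = sorted(scores, key=scores.get, reverse=True)[:k]
    return np.isin(labels, active)

# Toy example: a 4x4 "latent" whose right half is still changing strongly,
# split into two hand-made regions (standing in for Quickshift output).
delta = np.zeros((4, 4)); delta[:, 2:] = 1.0
labels = np.zeros((4, 4), dtype=int); labels[:, 2:] = 1
mask = adaptive_update_mask(delta, labels, keep_ratio=0.5)
# mask selects only region 1, the actively evolving right half
```

A real implementation would recompute this mask across denoising steps and add the paper's boundary-aware refinement so that frozen and updated regions stay coherent at their seams.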

Bowen Lin, Fanjiang Ye, Yihua Liu, Zhenghui Guo, Boyuan Zhang, Weijian Zheng, Yufan Xu, Tiancheng Xing, Yuke Wang, Chengming Zhang • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Text-to-Image Generation | MS-COCO 2017 (val) | FID 35.85 | 80 |
| Image Generation | MS COCO 2017 | Inference Time (s) 3.72 | 14 |
| Text-to-Image Generation | Dalle3 HQC 1M+ (5,000 pairs) | FID 49.9 | 4 |
