
Pyramidal Patchification Flow for Visual Generation

About

Diffusion transformers (DiTs) adopt Patchify, mapping patch representations to token representations through linear projections, to adjust the number of tokens input to DiT blocks and thus the computation cost. Instead of a single patch size for all timesteps, we introduce a Pyramidal Patchification Flow (PPFlow) approach: large patch sizes are used for high-noise timesteps and small patch sizes for low-noise timesteps; a linear projection is learned for each patch size; and Unpatchify is modified accordingly. Unlike Pyramidal Flow, our approach operates over full latent representations rather than pyramid representations, and adopts the normal denoising process without requiring the renoising trick. We demonstrate the effectiveness of our approach through two training manners. Training from scratch achieves a $1.6\times$ ($2.0\times$) inference speedup over SiT-B/2 for 2-level (3-level) pyramid patchification, with slightly lower training FLOPs and similar image generation performance. Training from pretrained normal DiTs achieves even better performance with little additional training time. The code and checkpoint are at https://github.com/fudan-generative-vision/PPFlow.
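The core mechanism described above — choosing a patch size from the noise level, projecting patches with a per-size linear layer, and inverting the operation in Unpatchify — can be sketched as follows. This is a minimal illustrative sketch in numpy, not the released PPFlow implementation; the class name, the 2-level patch sizes (4 and 2), and the timestep threshold of 0.5 are assumptions for illustration.

```python
import numpy as np

def patchify(latent, p):
    """Split a (C, H, W) latent into (N, p*p*C) patch tokens, N = (H/p)*(W/p)."""
    C, H, W = latent.shape
    gh, gw = H // p, W // p
    x = latent.reshape(C, gh, p, gw, p)
    # Reorder to (gh, gw, p, p, C) so each row is one flattened patch.
    return x.transpose(1, 3, 2, 4, 0).reshape(gh * gw, p * p * C)

def unpatchify(tokens, p, C, H, W):
    """Inverse of patchify: (N, p*p*C) tokens back to a (C, H, W) latent."""
    gh, gw = H // p, W // p
    x = tokens.reshape(gh, gw, p, p, C)
    return x.transpose(4, 0, 2, 1, 3).reshape(C, H, W)

class PyramidalPatchifier:
    """One learned linear projection per patch size (hypothetical sketch)."""

    def __init__(self, channels, hidden, patch_sizes=(4, 2), seed=0):
        rng = np.random.default_rng(seed)
        self.channels = channels
        self.patch_sizes = patch_sizes
        # Separate projection weights for each patch size, as in PPFlow.
        self.proj = {
            p: rng.standard_normal((p * p * channels, hidden)) * 0.02
            for p in patch_sizes
        }

    def patch_size_for(self, t):
        # Large patches (fewer tokens, cheaper blocks) at high-noise timesteps,
        # small patches at low-noise timesteps. Threshold is illustrative.
        return self.patch_sizes[0] if t > 0.5 else self.patch_sizes[-1]

    def __call__(self, latent, t):
        p = self.patch_size_for(t)
        tokens = patchify(latent, p)
        return tokens @ self.proj[p], p
```

With a 2-level pyramid of patch sizes 4 and 2, high-noise steps process 4x fewer tokens than low-noise steps (e.g. 64 vs. 256 tokens for a 32x32 latent), which is where the inference speedup comes from.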

Hui Li, Baoyou Chen, Liwei Zhang, Jiaye Li, Jingdong Wang, Siyu Zhu • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Class-conditional Image Generation | ImageNet 256x256 | Inception Score (IS) | 286.7 | 441
Text-to-Image Generation | GenEval | GenEval Score | 68 | 277
Class-conditional Image Generation | ImageNet 256x256 (train val) | FID | 3.83 | 178
Text-to-Image Generation | DPG-Bench | DPG Score | 84 | 89
Class-conditional Image Generation | ImageNet 512x512 | FID | 3.01 | 72
Text-to-Image Generation | T2I-CompBench | Color Fidelity | 75.66 | 9
