
Scalable High-Resolution Pixel-Space Image Synthesis with Hourglass Diffusion Transformers

About

We present the Hourglass Diffusion Transformer (HDiT), an image generative model that exhibits linear scaling with pixel count, supporting training at high-resolution (e.g. $1024 \times 1024$) directly in pixel-space. Building on the Transformer architecture, which is known to scale to billions of parameters, it bridges the gap between the efficiency of convolutional U-Nets and the scalability of Transformers. HDiT trains successfully without typical high-resolution training techniques such as multiscale architectures, latent autoencoders or self-conditioning. We demonstrate that HDiT performs competitively with existing models on ImageNet $256^2$, and sets a new state-of-the-art for diffusion models on FFHQ-$1024^2$.
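The claimed linear scaling with pixel count can be illustrated with a hypothetical cost model (not the authors' code): global self-attention over all pixel tokens costs O(n²), while an hourglass scheme that uses fixed-window local attention at the fine levels and global attention only on a small coarse grid stays O(n). The window size, level count, and coarse grid side below are illustrative assumptions, not values from the paper.

```python
# Hypothetical attention-cost model contrasting global attention with an
# hourglass-style hierarchy. All constants here are illustrative.

def global_attention_cost(side):
    n = side * side          # number of pixel tokens
    return n * n             # all-pairs interactions: O(n^2)

def hourglass_cost(side, window=7, levels=2, coarse_side=16):
    # Fine levels: each token attends only within a fixed local window,
    # so the cost is n * window^2 -- linear in the pixel count n.
    cost = 0
    s = side
    for _ in range(levels):
        cost += (s * s) * (window * window)
        s //= 2              # each hourglass level halves the side length
    # Coarse level: global attention on a small, fixed-size grid.
    cost += (coarse_side ** 2) ** 2
    return cost

for side in (256, 512, 1024):
    ratio = hourglass_cost(side) / global_attention_cost(side)
    print(f"{side}x{side}: hourglass/global cost ratio = {ratio:.6f}")
```

Doubling the resolution quadruples the pixel count, so the hourglass cost grows roughly 4x per doubling, whereas global attention grows 16x.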

Katherine Crowson, Stefan Andreas Baumann, Alex Birch, Tanishq Mathew Abraham, Daniel Z. Kaplan, Enrico Shippole • 2024

Related benchmarks

Task           | Dataset                                       | Metric   | Result | Rank
dMRI Synthesis | HCP Arbitrary b_n condition, S1200 (test)     | SSIM (%) | 82.71  | 16
dMRI Synthesis | HCP b_n = 1000 s/mm² condition, S1200 (test)  | SSIM (%) | 86.78  | 8
dMRI Synthesis | HCP b_n = 3000 s/mm² condition, S1200 (test)  | SSIM (%) | 81.19  | 8
