
Token Merging for Fast Stable Diffusion

About

The landscape of image generation has been forever changed by open vocabulary diffusion models. However, at their core these models use transformers, which makes generation slow. Better implementations to increase the throughput of these transformers have emerged, but they still evaluate the entire model. In this paper, we instead speed up diffusion models by exploiting natural redundancy in generated images by merging redundant tokens. After making some diffusion-specific improvements to Token Merging (ToMe), our ToMe for Stable Diffusion can reduce the number of tokens in an existing Stable Diffusion model by up to 60% while still producing high quality images without any extra training. In the process, we speed up image generation by up to 2x and reduce memory consumption by up to 5.6x. Furthermore, this speed-up stacks with efficient implementations such as xFormers, minimally impacting quality while being up to 5.4x faster for large images. Code is available at https://github.com/dbolya/tomesd.
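To make the workflow concrete, here is a minimal sketch of applying token merging to an existing Stable Diffusion pipeline with the tomesd library from the linked repository. The `tomesd.apply_patch` call and its `ratio` argument follow that repository's documented usage; the model ID and prompt are placeholders chosen for illustration, not taken from the paper.

```python
# Minimal sketch: patch an existing Stable Diffusion pipeline with ToMe.
# Assumes the `tomesd` package (https://github.com/dbolya/tomesd) and the
# Hugging Face `diffusers` library are installed; no retraining is needed.
import torch
import tomesd
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder model ID
    torch_dtype=torch.float16,
).to("cuda")

# Merge a fraction of the spatial tokens inside the UNet's transformer blocks.
# ratio=0.5 merges up to 50% of tokens; the paper reports high-quality images
# with up to ~60% of tokens removed.
tomesd.apply_patch(pipe, ratio=0.5)

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")

# The patch can be removed to restore the original, unmerged model.
tomesd.remove_patch(pipe)
```

Higher ratios trade image quality for speed and memory, and, as the abstract notes, the resulting speed-up stacks with efficient attention implementations such as xFormers.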

Daniel Bolya, Judy Hoffman • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Video Object Segmentation | DAVIS 2017 (val) | - | - | 1130 |
| Text-to-Image Generation | MS-COCO 2017 (val) | FID | 22.32 | 80 |
| Video Object Segmentation | SA-V (val) | J&F | 71.6 | 74 |
| Video Object Segmentation | SA-V (test) | J&F | 71.7 | 70 |
| Video Object Segmentation | MOSE (val) | J&F | 66.1 | 45 |
| Text-to-Image Generation | PartiPrompts | CLIP Score | 0.3108 | 26 |
| Video Object Segmentation | SA-V SAM2.1-B+ (test) | J&F | 63.1 | 22 |
| Video Object Segmentation | SA-V SAM2.1-B+ (val) | J&F | 65 | 22 |
| Video Object Segmentation | DAVIS 2017 SAM2.1-B+ (val) | J&F | 80.9 | 22 |
| Video Object Segmentation | MOSE SAM2.1-B+ (val) | J&F | 59.5 | 22 |

Showing 10 of 17 rows.
