
FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention

About

Diffusion models excel at text-to-image generation, especially subject-driven generation for personalized images. However, existing methods are inefficient because they require subject-specific fine-tuning, which is computationally intensive and hampers efficient deployment. Existing methods also struggle with multi-subject generation, as they often blend features among subjects. We present FastComposer, which enables efficient, personalized, multi-subject text-to-image generation without fine-tuning. FastComposer uses subject embeddings extracted by an image encoder to augment the generic text conditioning in diffusion models, enabling personalized image generation from subject images and textual instructions with only forward passes. To address identity blending in multi-subject generation, FastComposer proposes cross-attention localization supervision during training, which enforces that the attention of each reference subject is localized to the correct region of the target image. Naively conditioning on subject embeddings causes subject overfitting; to counter this, FastComposer proposes delayed subject conditioning in the denoising process, maintaining both identity and editability in subject-driven image generation. FastComposer generates images of multiple unseen individuals with different styles, actions, and contexts. It achieves a 300×–2500× speedup over fine-tuning-based methods and requires zero extra storage for new subjects. FastComposer paves the way for efficient, personalized, and high-quality multi-subject image creation. Code, model, and dataset are available at https://github.com/mit-han-lab/fastcomposer.
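The subject-embedding mechanism described above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the single-layer fusion matrix `W`, the tiny dimensions, and the function names are all assumptions made for clarity (the paper uses a learned image encoder and an MLP to fuse visual features into the text tokens).

```python
import numpy as np

# Illustrative sizes: real models use much larger embedding dimensions.
D_TEXT, D_IMG = 8, 8
rng = np.random.default_rng(0)
# Stand-in for a learned fusion module (one linear layer here).
W = rng.standard_normal((D_TEXT + D_IMG, D_TEXT)) * 0.1

def augment_conditioning(text_emb, subject_emb, subject_positions):
    """Replace the embedding at each subject-token position (e.g. the word
    "person") with a fused [text; image] embedding; all other prompt tokens
    are left unchanged, so the conditioning stays prompt-driven."""
    out = text_emb.copy()
    for i, pos in enumerate(subject_positions):
        fused = np.concatenate([text_emb[pos], subject_emb[i]]) @ W
        out[pos] = fused
    return out

text_emb = rng.standard_normal((5, D_TEXT))    # 5 prompt tokens
subject_emb = rng.standard_normal((1, D_IMG))  # 1 reference subject image
cond = augment_conditioning(text_emb, subject_emb, [2])  # token 2 names the subject
```

Because the fusion is a single forward pass, adding a new subject costs nothing beyond encoding one reference image, which is the source of the claimed speedup over per-subject fine-tuning.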
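The cross-attention localization supervision can be illustrated with a toy objective. The sketch below simply penalizes attention mass that falls outside each subject's segmentation mask; this is a simplification of the paper's training loss, and all shapes and names are illustrative.

```python
import numpy as np

def localization_loss(attn, masks):
    """Penalize cross-attention mass that leaks outside each subject's
    segmentation mask. `attn` and `masks` have shape (num_subjects, H, W);
    each subject's attention map sums to 1. A simplified stand-in for the
    paper's localization objective."""
    outside = attn * (1.0 - masks)              # attention landing off-subject
    return float(outside.sum(axis=(1, 2)).mean())

# Toy example: 2 subjects on a 4x4 latent grid.
masks = np.zeros((2, 4, 4))
masks[0, :, :2] = 1.0   # subject 0 occupies the left half
masks[1, :, 2:] = 1.0   # subject 1 occupies the right half

perfect = masks / masks.sum(axis=(1, 2), keepdims=True)  # all mass on-subject
uniform = np.full((2, 4, 4), 1.0 / 16)                   # mass spread evenly
```

Driving this loss toward zero during training discourages two reference subjects from attending to the same image region, which is exactly the identity-blending failure mode the abstract describes.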
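Delayed subject conditioning amounts to a simple schedule over the denoising steps. In this sketch, the cutoff fraction `alpha` is a hypothetical hyperparameter name, not the paper's notation.

```python
def select_conditioning(step, total_steps, alpha, text_cond, aug_cond):
    """Delayed subject conditioning: denoise with plain text conditioning for
    the first `alpha` fraction of steps (so overall layout and editability
    come from the prompt), then switch to the subject-augmented conditioning
    (so the later, detail-forming steps lock in the subject's identity)."""
    return text_cond if step < alpha * total_steps else aug_cond

# Over 50 denoising steps with alpha = 0.2, the first 10 steps are text-only.
schedule = [select_conditioning(t, 50, 0.2, "text", "text+subject")
            for t in range(50)]
```

The trade-off `alpha` controls: too small and the subject's identity is weakly preserved; too large and the image overfits the reference photo and ignores the textual instructions.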

Guangxuan Xiao, Tianwei Yin, William T. Freeman, Frédo Durand, Song Han • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Face Personalization | FaceForensics++ (test) | AdaFace Score | 0.1736 | 10 |
| Single-subject Image Generation | CelebV-T | Test Time (s) | 2 | 8 |
| Personalized Image Generation | CelebA-HQ (120 identities) | CLIP Score | 0.265 | 7 |
| Facial Expression Editing | 15 subjects (test) | Expression Coefficients | 0.133 | 6 |
| Single-subject Image Generation | Celeb-A (test) | Identity Preservation | 51.41 | 5 |
| Multi-subject Image Generation | CelebV-T | Identity Preservation | 0.431 | 5 |
| Personalized Image Generation | Universal Recontextualization | CLIP-T | 28.7 | 5 |
| Multi-subject Image Generation | Celeb-A | Identity Preservation | 43.11 | 4 |
| Personalized Image Generation | FaceForensics++ (100 videos, 40 prompts) | FID | 77.62 | 4 |
| Personalized Image Generation | User Study | WAC | 4.67 | 4 |

(Showing 10 of 11 rows)

Other info

Code: https://github.com/mit-han-lab/fastcomposer
