XVerse: Consistent Multi-Subject Control of Identity and Semantic Attributes via DiT Modulation
About
Achieving fine-grained control over subject identity and semantic attributes (pose, style, lighting) in text-to-image generation, particularly for multiple subjects, often undermines the editability and coherence of Diffusion Transformers (DiTs). Many approaches introduce artifacts or suffer from attribute entanglement. To overcome these challenges, we propose XVerse, a novel multi-subject controlled generation model. By transforming reference images into offsets for token-specific text-stream modulation, XVerse allows precise and independent control of each subject without disrupting image latents or features. Consequently, XVerse offers high-fidelity, editable multi-subject image synthesis with robust control over individual subject characteristics and semantic attributes, significantly improving personalized and complex scene generation.
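The core idea, turning a reference image into modulation offsets applied only to the text tokens describing one subject, can be illustrated with a minimal NumPy sketch. All names (`token_modulation`, the projection matrices `W_shift`/`W_scale`, the boolean `subject_mask`) are hypothetical illustrations, not the actual XVerse implementation:

```python
import numpy as np

def token_modulation(text_tokens, ref_embedding, subject_mask, W_shift, W_scale):
    """Apply subject-specific modulation offsets to text-stream tokens.

    text_tokens:   (T, d) text token features entering a DiT block
    ref_embedding: (r,)   embedding of the reference image for one subject
    subject_mask:  (T,)   boolean; True for tokens describing that subject
    W_shift/W_scale: (r, d) hypothetical learned projections producing
                     the additive / multiplicative modulation offsets
    """
    shift = ref_embedding @ W_shift        # (d,) additive offset
    scale = 1.0 + ref_embedding @ W_scale  # (d,) multiplicative offset
    out = text_tokens.copy()
    # Only the masked subject's tokens are modulated; all other tokens
    # (and the image latents) are left untouched, preserving editability.
    out[subject_mask] = out[subject_mask] * scale + shift
    return out
```

Because the offsets touch only one subject's tokens, each reference image can steer its subject independently, which is the property the abstract describes as disentangled multi-subject control.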
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Personalized Text-to-Image Generation | DreamBench++ Single-subject | CP 0.643 | 18 |
| Image Personalization | User Study Personalization Tasks | Concept Preservation (CP) 82.5 | 17 |
| Reference-based multi-human generation | MultiHuman TestBench | Count 81.7 | 14 |
| Identity-Preserving Multi-subject Image Generation | LAMICBench++ Fewer Subjects | ITC 77.65 | 12 |
| Identity-Preserving Multi-subject Image Generation | LAMICBench++ More Subjects | ITC 43.48 | 12 |
| Text+object to image generation | MoCA | NIQE 2.624 | 7 |
| Personalized Text-to-Image Generation | DynaIP-Bench Multi-subject | CP 54.8 | 7 |
| Compositional generation | XVerse Bench | CLIP Score 33.94 | 6 |
| Compositional generation | Our Bench | CLIP Score 32.33 | 6 |