# UMO: Scaling Multi-Identity Consistency for Image Customization via Matching Reward

## About
Recent advances in image customization show broad application prospects thanks to stronger customization capabilities. However, because humans are especially sensitive to faces, a significant challenge remains: preserving consistent identity while avoiding identity confusion across multi-reference images, which limits the identity scalability of customization models. To address this, we present UMO, a Unified Multi-identity Optimization framework designed to maintain high-fidelity identity preservation and alleviate identity confusion at scale. With its "multi-to-multi matching" paradigm, UMO reformulates multi-identity generation as a global assignment optimization problem and generally improves multi-identity consistency for existing image customization methods through reinforcement learning on diffusion models. To facilitate training, we develop a scalable customization dataset with multi-reference images, consisting of both synthesized and real parts. Additionally, we propose a new metric to measure identity confusion. Extensive experiments demonstrate that UMO not only improves identity consistency significantly but also reduces identity confusion across several image customization methods, setting a new state of the art among open-source methods on identity preservation. Code and model: https://github.com/bytedance/UMO
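The abstract casts multi-identity generation as a global assignment problem: the set of reference identities is matched against the faces detected in the generated image. The exact reward formulation is not given here; the following is a minimal sketch of one such matching reward, assuming L2-normalized face embeddings from an off-the-shelf recognizer. The names `matching_reward`, `ref_embeds`, and `gen_embeds` are illustrative and not taken from the UMO codebase.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def matching_reward(ref_embeds: np.ndarray, gen_embeds: np.ndarray):
    """Global-assignment matching between reference and generated identities.

    ref_embeds: (R, D) L2-normalized face embeddings of the reference identities.
    gen_embeds: (G, D) L2-normalized face embeddings detected in the generated image.
    Returns the mean matched similarity (a reward) and a simple confusion score.
    """
    # Cosine similarity between every (reference, generated) face pair.
    sim = ref_embeds @ gen_embeds.T  # shape (R, G)

    # Hungarian algorithm: a one-to-one assignment maximizing total similarity,
    # i.e. "multi-to-multi matching" solved as a global assignment problem.
    rows, cols = linear_sum_assignment(-sim)
    matched = sim[rows, cols]

    # Reward: average similarity over the optimally matched pairs.
    reward = float(matched.mean())

    # Confusion proxy: how strongly a generated face resembles a *non-matched*
    # reference relative to its matched one (higher = more identity mixing).
    confusion = 0.0
    for r, c in zip(rows, cols):
        others = np.delete(sim[:, c], r)
        if others.size:
            confusion = max(confusion, float(others.max() - sim[r, c]))
    return reward, confusion
```

Under these assumptions, maximizing the reward pushes each generated face toward exactly one reference identity, while the confusion term flags a generated face that drifts toward a non-matched reference, which is the failure mode an identity-confusion metric targets.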
## Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Reference-based multi-human generation | MultiHuman TestBench | Count | 70.5 | 14 |
| Identity-Preserving Text-to-Image Generation | IBench (41 prompts, 100 IDs) | Aesthetic Score | 66.9 | 7 |
| Identity Customization | IBench (ChineseID, editable long prompts) | Aesthetic Score | 0.669 | 6 |
| Personalized Text-to-Image Generation | IBench (ChineseID) | Aesthetic Score | 0.6689 | 6 |
| Multi-human generation | MultiID-2M (test) | Multi-ID (Ref) | 0.475 | 5 |