USO: Unified Style and Subject-Driven Generation via Disentangled and Reward Learning
About
Existing literature typically treats style-driven and subject-driven generation as two disjoint tasks: the former prioritizes stylistic similarity, whereas the latter insists on subject consistency, resulting in an apparent antagonism. We argue that both objectives can be unified under a single framework because they ultimately concern the disentanglement and re-composition of content and style, a long-standing theme in style-driven research. To this end, we present USO, a Unified Style-Subject Optimized customization model. First, we construct a large-scale triplet dataset consisting of content images, style images, and their corresponding stylized content images. Second, we introduce a disentangled learning scheme that simultaneously aligns style features and disentangles content from style through two complementary objectives, style-alignment training and content-style disentanglement training. Third, we incorporate a style reward-learning paradigm denoted as SRL to further enhance the model's performance. Finally, we release USO-Bench, the first benchmark that jointly evaluates style similarity and subject fidelity across multiple metrics. Extensive experiments demonstrate that USO achieves state-of-the-art performance among open-source models along both dimensions of subject consistency and style similarity. Code and model: https://github.com/bytedance/USO
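The recipe above combines three signals: a style-alignment objective, a content-style disentanglement objective over the (content, style, stylized) triplets, and a reward term from SRL. As a rough illustration of how such a triplet-based objective could be assembled, here is a minimal pure-Python sketch; the data class, loss names, and weights are illustrative assumptions, not the released implementation:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Triplet:
    """One training example: embeddings standing in for a content image,
    a style image, and the stylized image that combines them."""
    content: List[float]
    style: List[float]
    stylized: List[float]

def mse(a: List[float], b: List[float]) -> float:
    """Mean squared error between two equal-length embeddings."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def combined_loss(t: Triplet,
                  pred_style: List[float],
                  pred_content: List[float],
                  reward: float,
                  w_style: float = 1.0,
                  w_content: float = 1.0,
                  w_reward: float = 0.1) -> float:
    """Hypothetical combined objective: align predicted style features with
    the style reference, keep predicted content features tied to the content
    reference, and subtract a scaled reward (higher reward -> lower loss)."""
    l_style = mse(pred_style, t.style)        # style-alignment term
    l_content = mse(pred_content, t.content)  # content-disentanglement term
    return w_style * l_style + w_content * l_content - w_reward * reward

# Toy usage: perfect style/content predictions, reward of 2.0.
t = Triplet(content=[1.0, 0.0], style=[0.0, 1.0], stylized=[0.5, 0.5])
loss = combined_loss(t, pred_style=[0.0, 1.0],
                     pred_content=[1.0, 0.0], reward=2.0)
print(loss)  # -> -0.2 (both MSE terms are zero, reward term dominates)
```

The reward term is subtracted so that maximizing the style reward and minimizing the reconstruction terms pull in the same direction, mirroring the role SRL plays on top of the disentangled objectives.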
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multi-image reasoning | OmniContext | Single Scene Char Score: 8.03 | 20 |
| Subject-driven generation | SconeEval | Composition Single COM: 8.03 | 11 |
| Style-driven generation | Multi-task Image-driven Generation Evaluation Set | CSD: 51.6 | 6 |
| Image-driven generation | 3SGen-Bench | Subject Fidelity Score: 6.95 | 6 |
| Subject-driven generation | Multi-task Image-driven Generation Evaluation Set | CLIP-I: 0.617 | 6 |